[whatwg] Counterproposal for canvas in workers

Robert O'Callahan robert at ocallahan.org
Thu Oct 17 20:21:49 PDT 2013


On Fri, Oct 18, 2013 at 4:10 PM, Rik Cabanier <cabanier at gmail.com> wrote:

> They would still have to wait for each other so the images are composited
> in-order. If you don't care about that, the 'synchronized' option would let
> you draw as soon as you exit the task (which is how Chrome always draws
> since it's faster)
>

What do you mean "wait for each other"? You only have to wait until they're
all finished. The cost of actually compositing the images is low.
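
For example, the page could just gather the per-layer bitmaps and draw
them once the last one arrives. A rough sketch (assuming each worker
renders one layer and posts back an ImageBitmap, e.g. produced by the
proposed transferToImageBitmap()):

// Main thread: collect one bitmap per worker, then composite in order.
const canvas = document.querySelector('canvas') as HTMLCanvasElement;
const ctx = canvas.getContext('2d')!;
const workers = [new Worker('layer0.js'), new Worker('layer1.js')];

const bitmaps = workers.map(
  (w) => new Promise<ImageBitmap>((resolve) => {
    w.onmessage = (e) => resolve(e.data as ImageBitmap);
  })
);

// Wait until they're all finished; drawing the finished bitmaps is cheap.
Promise.all(bitmaps).then((layers) => {
  for (const layer of layers) ctx.drawImage(layer, 0, 0);
});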

> > In fact, an implementation could choose to take the deferred-drawing
> > approach instead. You would queue up drawing commands in the WorkerCanvas
> > (or the drawing context), and then transferToImageBitmap would not
> > immediately render but produce an ImageBitmap implementation
> > encapsulating the list of drawing commands to be drawn later,
> > wherever/whenever that ImageBitmap ended up being used. I think for
> > commit() the implementation would always want to force rasterization on
> > the worker (or possibly some dedicated canvas-rendering thread); you
> > could forward a list of drawing commands to the compositor thread for
> > rasterization but I don't think there's any reason to do that (and some
> > good reasons not to).
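
To make the deferred-drawing option concrete: the recording side only
has to queue closures, and rasterization is a replay. A sketch in
ordinary script terms (RecordingContext, transfer() and rasterize() are
illustrative names, not proposed API):

// Illustrative sketch of command recording, not a real browser API.
type Command = (ctx: CanvasRenderingContext2D) => void;

class RecordingContext {
  private commands: Command[] = [];

  fillRect(x: number, y: number, w: number, h: number): void {
    this.commands.push((ctx) => ctx.fillRect(x, y, w, h));
  }

  // Stands in for transferToImageBitmap(): hand off the recorded
  // commands instead of pixels; rasterization happens later, wherever
  // and whenever the resulting "bitmap" is actually used.
  transfer(): Command[] {
    const recorded = this.commands;
    this.commands = [];
    return recorded;
  }
}

// At use time, replay against a real context to rasterize.
function rasterize(commands: Command[], ctx: CanvasRenderingContext2D): void {
  for (const cmd of commands) cmd(ctx);
}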
> Can you tell me how you can ensure that you don't do too much work? Drawing
> in a continuous loop using commit() would waste a lot of resources.
>

How to throttle production of frames via commit() is a completely separate
issue. Any API that lets workers publish frames directly to the compositor
will have to deal with it, in roughly the same way.
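
One plausible shape for it (a sketch, not a proposal): drive the worker
from the page's requestAnimationFrame, so it draws at most one frame per
display refresh instead of spinning in a loop around commit().

// main thread: forward rAF ticks so the worker can't outpace the display
const renderWorker = new Worker('render.js');
const tick = (now: number) => {
  renderWorker.postMessage({ type: 'frame', now });
  requestAnimationFrame(tick);
};
requestAnimationFrame(tick);

// render.js (worker): draw at most one frame per tick, then commit().
// 'ctx' and drawScene() are placeholders for the proposed worker canvas
// context and the application's drawing code.
declare const ctx: { commit(): void };
declare function drawScene(now: number): void;

self.onmessage = (e: MessageEvent) => {
  if (e.data.type !== 'frame') return;
  drawScene(e.data.now);
  ctx.commit(); // publish this frame to the compositor
};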

Rob
-- 
Jesus said, "Why do you entertain evil thoughts in your hearts? Which
is easier: to say, 'Your sins are forgiven,' or to say, 'Get up and
walk'? But I want you to know that the Son of Man has authority on
earth to forgive sins." So he said to the paralyzed man, "Get up, take
your mat and go home." Then the man got up and went home.


