[whatwg] Counterproposal for canvas in workers
glenn at zewt.org
Fri Oct 18 07:17:36 PDT 2013
On Thu, Oct 17, 2013 at 10:25 PM, Robert O'Callahan <robert at ocallahan.org> wrote:
> On Fri, Oct 18, 2013 at 3:10 PM, Glenn Maynard <glenn at zewt.org> wrote:
>> "transferToImageBuffer" looks like it would create a new ImageBuffer for
>> each frame, so you'd need to add a close() method to make sure they don't
>> accumulate due to GC lag,
> That's a good point. We will need something like that. It would only
> neuter that thread's (main thread or worker thread) version of the
But don't forget that this is a cost to authors, who now have to .close()
the object. If they forget, don't know they need to, or miss some code
paths, there are no blatant side effects--things just get mysteriously
slower, probably more so in some implementations than in others (which is
never good). With attachToCanvas, this can't happen.
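The authoring burden being described can be sketched like this. The mock class below stands in for the hypothetical per-frame ImageBuffer object (the name and close() method are assumptions from this thread, not spec text); the point is only that an explicit close() keeps live-buffer count bounded, while relying on GC does not:

```javascript
// Mock of a transferred per-frame buffer that pins memory until close()
// is called (or GC eventually runs). Names here are illustrative only.
class MockImageBuffer {
  static liveCount = 0;                 // buffers not yet closed/collected
  constructor() { MockImageBuffer.liveCount++; this.closed = false; }
  close() {
    if (!this.closed) { this.closed = true; MockImageBuffer.liveCount--; }
  }
}

function renderFrames(frames, { close }) {
  MockImageBuffer.liveCount = 0;
  for (let i = 0; i < frames; i++) {
    const buf = new MockImageBuffer();  // "transferToImageBuffer" each frame
    // ... hand buf off for compositing ...
    if (close) buf.close();             // explicit, easy-to-forget release
  }
  return MockImageBuffer.liveCount;
}

console.log(renderFrames(60, { close: false })); // 60 -- all waiting on GC
console.log(renderFrames(60, { close: true }));  // 0  -- bounded
```

A forgetful author sees no error, just the "mysteriously slower" behavior described above.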
>> and it seems like turning this into a fast buffer swap under the hood
>> would be harder.
> I don't see why.
To me it seems obviously more complicated, but I guess I'll leave that
evaluation to implementors.
>> Also, with the "transferToImageBuffer" approach, if you want to render
>> from a worker into multiple canvases in the UI thread, you have to post
>> those ImageBuffers over to the main thread each frame, which has the same
>> (potential) synchronization issues as the transferDrawingBufferToCanvas
> What are those issues? You can do a single postMessage passing a complete
> set of ImageBitmaps.
I don't know the answer to this; my feeling is that posting frames to the
UI thread, where UI-thread scripts also run, may or may not cause
performance or smoothness issues, but doing it all in the worker avoids any
potential for the problem.
On Thu, Oct 17, 2013 at 10:48 PM, Rik Cabanier <cabanier at gmail.com> wrote:
>>> This proposal implies an extra buffer for the 2d context. My proposal
>>> doesn't require that so it's more memory efficient + you can draw in
>> You always need at least two buffers: a back-buffer for drawing and a
>> front-buffer for display (compositing). Otherwise, as soon as you start
>> drawing the next frame, the old frame is gone, so you won't be able to
>> recomposite (on reflow, CSS filter changes, etc). Double-buffering at a
>> minimum is pretty standard, even for native applications (with none of this
>> Web complexity in the way).
> Won't you need another front-buffer for the worker to draw to?
I don't see why. You just use double-buffering as always: the worker draws
into the back-buffer, then the drawing buffer (back-buffer) and the buffer
being displayed (front-buffer) are flipped, and you start over. I don't
think there's any difference here between native OpenGL, today's WebGL, and
this approach.
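A minimal model of that two-buffer flip (illustrative only; `makeSwapChain`, `draw`, and `present` are invented names, not a proposed API): the front buffer always holds the last *completed* frame, so the compositor can re-read it at any time without waiting on the worker.

```javascript
// Two buffers, no copies: draw into the back buffer, then swap roles.
function makeSwapChain() {
  let front = { frame: null };   // displayed / recompositable at any time
  let back = { frame: null };    // being drawn by the worker
  return {
    draw(frame) { back.frame = frame; },
    present() { [front, back] = [back, front]; },  // flip, not copy
    displayed() { return front.frame; },
  };
}

const chain = makeSwapChain();
chain.draw('frame 1'); chain.present();
chain.draw('frame 2');                  // frame 2 in progress...
console.log(chain.displayed());         // 'frame 1' -- still compositable
chain.present();
console.log(chain.displayed());         // 'frame 2'
```

While 'frame 2' is half-drawn, a reflow or CSS filter change can still recomposite 'frame 1', which is the whole reason for the second buffer.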
(I realize I'm looking at this from a WebGL-biased perspective. WebGL
clears the drawing buffer between presentations unless you tell it not to,
specifically to allow this sort of fast buffer flipping. 2d canvas doesn't
do that, so to allow copy-free display it'd need a flag like WebGL's
preserveDrawingBuffer = false. This applies to any API trying to get
buffer flipping out of 2d canvas, though--something has to be added or
changed. We don't need to address this here.)
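The trade-off in that parenthetical can be modeled in a few lines (a toy, not browser internals; `present` and its state shape are invented for illustration): preserving the drawing buffer forces a copy at presentation time, while letting it clear allows a cheap flip.

```javascript
// Toy model of the preserveDrawingBuffer trade-off: preserve => copy,
// don't preserve => flip (drawing buffer comes back cleared).
function present(state, { preserveDrawingBuffer }) {
  if (preserveDrawingBuffer) {
    // Copy: display gets a snapshot; the drawing buffer keeps its pixels.
    return { displayed: state.drawing, drawing: state.drawing, copied: true };
  }
  // Flip: display takes the buffer; the drawing buffer starts out cleared.
  return { displayed: state.drawing, drawing: null, copied: false };
}

console.log(present({ drawing: 'pixels' }, { preserveDrawingBuffer: false }));
// { displayed: 'pixels', drawing: null, copied: false }
console.log(present({ drawing: 'pixels' }, { preserveDrawingBuffer: true }));
// { displayed: 'pixels', drawing: 'pixels', copied: true }
```

This is why any copy-free 2d-canvas path would need an equivalent of WebGL's preserveDrawingBuffer = false default.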