[whatwg] Counterproposal for canvas in workers

Rik Cabanier cabanier at gmail.com
Thu Oct 17 15:14:14 PDT 2013


On Thu, Oct 17, 2013 at 3:01 PM, Glenn Maynard <glenn at zewt.org> wrote:

> On Thu, Oct 17, 2013 at 4:50 PM, Rik Cabanier <cabanier at gmail.com> wrote:
>
>> It seemed like that proposal was harder. Synchronization with the main
>> drawing thread and the continuous committing seemed difficult too.
>>
>
> Have implementors said that synchronizing the flip is (unreasonably) hard
> to implement?  (I'm not an implementor, but this proposal feels
> unimplementable to me, or at least catastrophically difficult for WebGL.
>

That could be. I'm not all that familiar with WebGL.


> Compositors are often already threaded, so synchronizing a buffer flip
> with the compositor doesn't seem too far out there.)
>

This proposal implies an extra buffer for the 2d context. My proposal
doesn't require that, so it's more memory-efficient and you can draw in
parallel.
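
To make the buffering difference concrete, here is a rough sketch of the two
models as I understand them. The names and shapes are purely illustrative and
don't come from either proposal; it's just meant to show where the extra
buffer comes from:

```typescript
// Illustrative sketch only; not the API of either proposal.

interface Buffer { pixels: Uint8ClampedArray; }

const newBuffer = (): Buffer =>
  ({ pixels: new Uint8ClampedArray(300 * 150 * 4) });

// Commit/flip model: the worker draws into a back buffer, hands the finished
// buffer to the compositor, and starts on a fresh one. The second buffer is
// the extra memory cost for the 2d context mentioned above.
class CommitModel {
  private back: Buffer = newBuffer();
  constructor(private present: (front: Buffer) => void) {}
  drawFrame(render: (b: Buffer) => void): void {
    render(this.back);
    this.present(this.back); // compositor takes ownership of this buffer
    this.back = newBuffer(); // worker needs a new back buffer to keep drawing
  }
}

// Direct model: the worker draws straight into the single canvas buffer, so
// no second buffer is needed, and drawing can proceed in parallel with the
// main thread (synchronization with the compositor is handled separately).
class DirectModel {
  constructor(private shared: Buffer) {}
  drawFrame(render: (b: Buffer) => void): void {
    render(this.shared); // in place; compositor reads the same buffer
  }
}
```

In the commit model the worker always needs a second buffer to draw into while
the compositor holds the previous one; in the direct model there is only ever
one buffer, which is where the memory and parallelism difference comes from.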


>
>
>> In addition, Ken wanted multiple workers to access the same canvas, which I
>> didn't see addressed (unless I missed it).
>>
>
> I don't remember "multiple workers accessing the same canvas" and I'm not
> quite sure what it means.  I do remember "a single (WebGL) context
> rendering to multiple canvases".  Is that what you're thinking of?
>

I went back over the history and that was indeed his use case.


>
> On Thu, Oct 17, 2013 at 4:51 PM, Rik Cabanier <cabanier at gmail.com> wrote:
>
>> Thanks Glenn!
>> With that info, will there ever be a way to use WebGL from different
>> workers that all render to the same WebGL context?
>>
>
> Sorry, which use case is this for?  I'm not sure why you'd want to do
> that, and it sounds like it would expose thread-safety issues to the
> platform.  (I'm not sure if you mean the same thing here and above--they
> sound similar, but you said "canvas" in one place and "WebGL context" in
> the other.)
>
> (Sorry if I'm forgetting things, the subject has been busy and a little
> bit noisy...)
>

Yes. Sorry to add to the noise :-)


