[whatwg] Offscreen canvas (or canvas for web workers).
Andrew Grieve
agrieve at google.com
Wed Feb 24 07:45:44 PST 2010
Regarding the three steps of offscreen rendering:
1. Draw stuff
2. Ship pixels to main thread
3. Draw them on the screen.
How would you do #2 efficiently? If you used toDataURL(), then you have to
encode a PNG on one side and then decode the PNG on the main thread. I think
we might want to add some sort of API for blitting directly from an
offscreen canvas to an onscreen one. Perhaps via a canvas ID.
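To make that concrete, here is a rough sketch of the round trip as I picture
it; createOffscreenCanvas() and the message shape are made up, and only
getImageData()/putImageData() and postMessage() are existing APIs:

// Worker side (hypothetical: assumes the proposal gives workers a canvas).
declare function createOffscreenCanvas(w: number, h: number): any; // made-up name

const canvas = createOffscreenCanvas(640, 480);
const ctx = canvas.getContext("2d");
ctx.fillStyle = "teal";
ctx.fillRect(0, 0, 640, 480);                    // 1. draw stuff

const frame = ctx.getImageData(0, 0, 640, 480);  // raw RGBA bytes, no PNG encode
(self as any).postMessage({ frame });            // 2. ship pixels to the main thread

// Main thread:
// worker.onmessage = (e) => {
//   const onscreen = document.querySelector("canvas")!;
//   onscreen.getContext("2d")!.putImageData(e.data.frame, 0, 0);  // 3. draw on screen
// };

However a direct blit ends up being spelled, the win would be skipping both
the PNG round trip and the copy of a large pixel array through postMessage().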
Andrew
On Wed, Feb 24, 2010 at 6:12 AM, Maciej Stachowiak <mjs at apple.com> wrote:
>
> On Feb 24, 2010, at 1:35 AM, Jonas Sicking wrote:
>
> On Wed, Feb 24, 2010 at 12:14 AM, Maciej Stachowiak <mjs at apple.com>
>> wrote:
>>
>>>
>>> On Feb 24, 2010, at 12:09 AM, Maciej Stachowiak wrote:
>>>
>>> On Feb 23, 2010, at 10:04 PM, Jonas Sicking wrote:
>>>
>>> On Tue, Feb 23, 2010 at 9:57 PM, Maciej Stachowiak <mjs at apple.com>
>>> wrote:
>>>
>>> - Raytracing a complex scene at high resolution.
>>>
>>> - Drawing a highly zoomed in high resolution portion of the Mandelbrot
>>> set.
>>>
>>> To be fair though, you could compute the pixels for those with just math;
>>> there is no need to have a graphics context type abstraction.
>>>
>>> http://people.mozilla.com/~sicking/webgl/ray.html
>>>
>>> I did not think it was possible to write a proper raytracer for arbitrary
>>> content all as a shader program, but I do not know enough about 3D
>>> graphics
>>> to know if that demo is correct or if that is possible in general. Point
>>> conceded though.
>>>
>>
>> The big thing that GLSL is lacking is a stack, making it impossible to
>> recurse properly. This isn't a huge problem to work around, though it can
>> result in ugly code. Especially if you want to support transparent
>> objects, in which case you'll essentially have to unroll recursion
>> manually by copying code.
>>
>> This of course makes it impossible to recurse to arbitrary levels,
>> though that is something you generally don't want to do anyway in a
>> ray tracer since it costs a lot of CPU (or in this case GPU) cycles
>> for very little visual gain.
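(Roughly what that unrolling looks like, sketched in plain TypeScript rather
than GLSL; intersect(), localShade() and reflect() are placeholder helpers,
but the shape is the same: recursion on reflection rays becomes a fixed-depth
loop that accumulates a weighted color.)

// Sketch only: intersect(), localShade() and reflect() are stand-ins for
// real scene code; the point is the loop structure, not the shading.
type Ray = { origin: number[]; dir: number[] };
type Hit = { point: number[]; normal: number[]; reflectivity: number } | null;

declare function intersect(ray: Ray): Hit;         // nearest hit, or null for a miss
declare function localShade(hit: Hit): number[];   // direct lighting at the hit point
declare function reflect(ray: Ray, hit: Hit): Ray; // reflected ray off the surface

const MAX_BOUNCES = 3; // fixed depth, since GLSL has no stack for real recursion

function traceIterative(primary: Ray): number[] {
  let color = [0, 0, 0];
  let weight = 1.0;
  let ray = primary;
  for (let bounce = 0; bounce < MAX_BOUNCES; bounce++) {
    const hit = intersect(ray);
    if (!hit) break;
    const local = localShade(hit);
    color = color.map((c, i) => c + weight * local[i]);
    if (hit.reflectivity <= 0) break;      // nothing more to gain from this path
    weight *= hit.reflectivity;            // deeper bounces contribute less and less
    ray = reflect(ray, hit);
  }
  return color;
}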
>>
>>> http://people.mozilla.com/~sicking/webgl/juliaanim.html
>>> http://people.mozilla.com/~sicking/webgl/mandjulia.html
>>>
>>> Neither of the examples you posted seems to have the ability to zoom in, so I
>>> don't think they show anything about doing this to extremely high
>>> accuracy.
>>> But I see your point that much of this particular computation can be done
>>> on
>>> the GPU, up to probably quite high limits. Replace this example with your
>>> choice of non-data-parallel computation.
>>>
>>> Following the links, this demo does do zoom, but it will go all jaggy
>>> past a
>>> certain zoom level, presumably due to limitations of GLSL.
>>> http://learningwebgl.com/lessons/example01/?
>>>
>>
>> Indeed. Zooming is no problem at all and doesn't require any heavier
>> math than what is done in my demo.
>>
>
> Zooming does require more iterations to get an accurate edge, and WebGL has
> to limit your loop cycles at some point to prevent locking up the GPU. But
> of course once you are at that level it would be pretty darn slow on a CPU.
> I have seen Mandelbrot demos that allow essentially arbitrary zoom (or at
> least, the limit would be the size of your RAM, not the size of a float).
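(For concreteness, the loop being capped is just the escape-time iteration
below; the deeper the zoom, the larger maxIter has to be before pixels near
the boundary resolve. Plain TypeScript here, not shader code.)

// Escape-time iteration for one pixel, c = (cx, cy).
// Deeper zooms need a larger maxIter before points near the boundary are
// classified correctly, which is exactly the loop a GPU will want to cap.
function mandelbrotIterations(cx: number, cy: number, maxIter: number): number {
  let x = 0, y = 0;                        // z = 0
  for (let i = 0; i < maxIter; i++) {
    if (x * x + y * y > 4) return i;       // escaped: definitely outside the set
    const xNew = x * x - y * y + cx;       // z = z^2 + c (real part)
    y = 2 * x * y + cy;                    //             (imaginary part)
    x = xNew;
  }
  return maxIter;                          // still bounded after maxIter steps
}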
>
>
>> I experimented with allowing the
>> animations to be stopped at arbitrary points and then allowing
>> zooming. However it required more UI work than I was interested in
>> doing at the time.
>>
>> The reason the demo goes jaggy after a while is due to limitations in
>> IEEE 754 floats.
>>
>
> On the CPU you could go past that if you cared to by coding your own high
> precision math. But it would be quite slow.
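(A sketch of that idea using fixed-point values held in arbitrary-size
integers; BigInt keeps the example short, but a hand-rolled bignum would look
much the same. The iteration itself is unchanged, only the number
representation differs, and it is indeed far slower than hardware floats.)

// Fixed-point Mandelbrot step: values are BigInts scaled by 2^PRECISION_BITS,
// so zoom depth is limited by time and memory rather than a float's mantissa.
const PRECISION_BITS = 128n;
const ONE = 1n << PRECISION_BITS;
const mul = (a: bigint, b: bigint) => (a * b) >> PRECISION_BITS; // fixed-point multiply

function mandelbrotIterationsFixed(cx: bigint, cy: bigint, maxIter: number): number {
  let x = 0n, y = 0n;
  for (let i = 0; i < maxIter; i++) {
    const x2 = mul(x, x), y2 = mul(y, y);
    if (x2 + y2 > 4n * ONE) return i;      // |z|^2 > 4: escaped
    y = 2n * mul(x, y) + cy;               // uses the old x, so update y first
    x = x2 - y2 + cx;
  }
  return maxIter;
}

// e.g. c = (-0.75, 0.1) becomes cx = -(3n * ONE) / 4n, cy = ONE / 10n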
>
>
>
>> But I should clarify that my point wasn't that WebGL makes
>> off-main-thread graphics processing unneeded. I just thought it was
>> funny that the two examples you brought up were exactly the things
>> that I had experimented with. Although I wouldn't be surprised if a
>> lot of the image processing effects that people want to do can be
>> written as shader programs. It would definitely be interesting to know if
>> WebGL could be supported on workers.
>>
>
> I'm very much interested in the possibility of WebGL on Workers, which is
> why I suggested, when reviewing early drafts of this proposal, that the
> object should be an OffscreenCanvas rather than a special Worker-only
> version of a 2d context (with implied built-in buffer). This makes it
> possible to extend it to include WebGL.
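(None of the names below appear in any draft; this is just a sketch of the
shape being described: one worker-side canvas object that hands out whichever
context type it is asked for, 2d now and WebGL if that is ever allowed in
workers.)

// Hypothetical worker-side sketch; none of these names are in any spec.
// The point of exposing an OffscreenCanvas object (rather than a worker-only
// 2d context with a built-in buffer) is that the same object can later hand
// out a WebGL context as well.
declare const offscreenCanvas: {
  width: number;
  height: number;
  getContext(type: "2d" | "webgl"): any;
};

const ctx2d = offscreenCanvas.getContext("2d");       // the 2d case in the proposal
ctx2d.fillRect(0, 0, offscreenCanvas.width, offscreenCanvas.height);

const gl = offscreenCanvas.getContext("webgl");       // the extension point
gl.clearColor(0, 0, 0, 1);
gl.clear(gl.COLOR_BUFFER_BIT);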
>
> Regards,
> Maciej
>
>