[whatwg] Offscreen canvas (or canvas for web workers).
mjs at apple.com
Wed Feb 24 03:12:26 PST 2010
On Feb 24, 2010, at 1:35 AM, Jonas Sicking wrote:
> On Wed, Feb 24, 2010 at 12:14 AM, Maciej Stachowiak <mjs at apple.com> wrote:
>> On Feb 24, 2010, at 12:09 AM, Maciej Stachowiak wrote:
>> On Feb 23, 2010, at 10:04 PM, Jonas Sicking wrote:
>> On Tue, Feb 23, 2010 at 9:57 PM, Maciej Stachowiak <mjs at apple.com> wrote:
>> - Raytracing a complex scene at high resolution.
>> - Drawing a highly zoomed in high resolution portion of the
>> Mandelbrot set.
>> To be fair though, you could compute the pixels for those directly;
>> there is no need to have a graphics context type abstraction.
>> I did not think it was possible to write a proper raytracer for such
>> content all as a shader program, but I do not know enough about 3D to
>> know whether that demo is correct or whether that is possible in
>> general. That point is conceded, though.
> The big thing that GLSL is lacking is a stack, making it impossible to
> recurse properly. This isn't a huge problem to work around, though it
> can result in ugly code. Especially if you want to support transparent
> objects, in which case you'll essentially have to unroll the recursion
> manually by copying code.
> This of course makes it impossible to recurse to arbitrary levels,
> though that is something you generally don't want to do anyway in a
> ray tracer since it costs a lot of CPU (or in this case GPU) cycles
> for very little visual gain.
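The unrolling Jonas describes can be sketched in ordinary code (a hypothetical illustration, not from the thread; `shade`, `reflectivity`, and `next_hit` are invented stubs): the recursive reflection call is replaced by a loop that carries the accumulated attenuation forward, which is the shape GLSL forces because it has no call stack.

```python
# Hypothetical sketch (not from the thread): unrolling a ray tracer's
# reflection recursion into a loop, as GLSL's lack of a call stack
# requires. shade/reflectivity/next_hit are invented constant stubs.

MAX_BOUNCES = 3  # a fixed bound, standing in for GLSL's unrolled depth

def shade(hit):
    """Local surface colour at a hit point (constant stub)."""
    return 0.5

def reflectivity(hit):
    """Fraction of light carried by the reflected ray (constant stub)."""
    return 0.25

def next_hit(hit):
    """Follow the reflected ray to the next surface (stub)."""
    return hit + 1

def trace_recursive(hit, depth=0):
    # The natural recursive formulation -- not expressible in GLSL.
    colour = shade(hit)
    if depth < MAX_BOUNCES:
        colour += reflectivity(hit) * trace_recursive(next_hit(hit), depth + 1)
    return colour

def trace_iterative(hit):
    # The unrolled version: carry the accumulated attenuation forward
    # instead of recursing, so a bounded loop suffices.
    colour, attenuation = 0.0, 1.0
    for _ in range(MAX_BOUNCES + 1):
        colour += attenuation * shade(hit)
        attenuation *= reflectivity(hit)
        hit = next_hit(hit)
    return colour
```

A single loop works only while each hit spawns one secondary ray; transparent objects spawn two (reflection and refraction), which is why Jonas says you end up copying code.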
>> Neither of the examples you posted seems to have the ability to zoom
>> in, so I don't think they show anything about doing this at extremely
>> high zoom levels. But I see your point that much of this particular
>> computation can be done on the GPU, up to probably quite high limits.
>> Replace this example with your choice of non-data-parallel computation.
>> Following the links, this demo does do zoom, but it will go all jaggy
>> past a certain zoom level, presumably due to limitations of GLSL.
> Indeed. Zooming is no problem at all and doesn't require any heavier
> math than what is done in my demo.
Zooming does require more iterations to get an accurate edge, and
WebGL has to limit your loop cycles at some point to prevent locking
up the GPU. But of course once you are at that level it would be
pretty darn slow on a CPU. I have seen Mandelbrot demos that allow
essentially arbitrary zoom (or at least, the limit would be the size
of your RAM, not the size of a float).
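A minimal sketch (mine, not from the thread) of why deeper zoom demands more iterations: points ever closer to the set's boundary take ever longer to escape, and at a deep zoom every pixel is near the boundary, so a low iteration cap misclassifies pixels and the edge goes inaccurate.

```python
# Sketch (not from the thread): escape-time iteration count for a point
# in the complex plane. Points near the Mandelbrot set's boundary take
# many iterations to escape, so a fixed cap misclassifies them.

def escape_iterations(c, max_iter):
    """Iterations until |z| exceeds 2 under z -> z*z + c, else max_iter."""
    z = 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return i
    return max_iter

# c = 0.25 lies on the boundary (the cardioid cusp). A point just outside
# does escape, but only after several hundred iterations, so with a cap
# of 100 it is wrongly classified as inside the set.
near_boundary = complex(0.2501, 0.0)
```

With `max_iter=100` the point above reports "inside"; raising the cap reveals it escapes, which is exactly the extra work a zoomed-in render pays per pixel.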
> I experimented with allowing the
> animations to be stopped at arbitrary points and then allowing
> zooming. However it required more UI work than I was interested in
> doing at the time.
> The reason the demo goes jaggy after a while is due to limitations in
> IEEE 754 floats.
On the CPU you could go past that if you cared to by coding your own
high precision math. But it would be quite slow.
> But I should clarify that my point wasn't that WebGL makes
> off-main-thread graphics processing unneeded. I just thought it was
> funny that the two examples you brought up were exactly the things
> that I had experimented with. Although I wouldn't be surprised if a
> lot of the image processing effects that people want to do can be
> written as shader programs. Would definitely be interesting to know if
> WebGL could be supported on workers.
I'm very much interested in the possibility of WebGL on Workers, which
is why I suggested, when reviewing early drafts of this proposal, that
the object should be an OffscreenCanvas rather than a special Worker-
only version of a 2d context (with implied built-in buffer). This
makes it possible to extend it to include WebGL.