[whatwg] Hardware accelerated canvas
jamesr at google.com
Tue Sep 4 09:30:26 PDT 2012
I believe this ship has already sailed for the most part - several major
browsers (starting with IE9) have shipped GPU based canvas 2d
implementations that simply lose the image buffer on a lost context. Given
that there are a fair number of benchmarks (of varying quality) around
canvas 2d speed I doubt vendors will be able to give up speed.
It's also important to note that unlike WebGL the only thing lost on a lost
context is the image buffer itself. With WebGL, the page has to regenerate
a large number of resources (shaders, buffers, textures) before it can
render the next frame. With canvas the page can just start drawing. Many
applications redraw the entire canvas on every frame so lost context
recovery is identical to normal operation - just draw the thing. All other
resources are managed and can be regenerated by the browser without script
intervention.
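The "just draw the thing" point can be sketched as follows. All names here are hypothetical (this is not code from any browser or spec); it just shows an app whose per-frame repaint doubles as its lost-context recovery path:

```javascript
// Sketch (hypothetical names): the whole canvas is repainted from
// application state every frame, so losing the image buffer costs nothing
// extra - the next frame redraws everything anyway.
function drawFrame(ctx, state) {
  ctx.clearRect(0, 0, state.width, state.height);
  ctx.fillStyle = state.color;
  ctx.fillRect(state.x, state.y, 10, 10);
}

// In a browser this would be driven by requestAnimationFrame; guarded so
// the sketch stays runnable outside a DOM environment.
if (typeof document !== 'undefined') {
  const canvas = document.querySelector('canvas');
  const ctx = canvas.getContext('2d');
  const state = { width: canvas.width, height: canvas.height,
                  x: 0, y: 0, color: 'red' };
  function tick() {
    state.x += 1;          // advance app state
    drawFrame(ctx, state); // full repaint - also the recovery path
    requestAnimationFrame(tick);
  }
  tick();
}
```

An app structured this way needs no special recovery logic at all: after a context loss, the next `drawFrame` call restores the buffer as a side effect of normal operation.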
On Mon, Sep 3, 2012 at 9:11 AM, Ian Hickson <ian at hixie.ch> wrote:
> There are ways to make it work without forgoing acceleration, e.g. taking
> regular "backups" of the canvas contents, remembering every instruction
> that was sent to the canvas, etc.
We investigated these and other options when first looking at GPU
acceleration in Chrome. None seemed feasible. Readbacks are expensive.
Bandwidth from GPU to main memory in split memory systems is limited and
doing a readback is a pipeline stall. Recording draw commands works for
some path-only use cases, but many canvases draw from dynamic sources such
as <video>s or other <canvas>es. Keeping those source resources around is
quite expensive, especially when they may be GPU-resident to start with and
themselves require a readback.
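The "remember every instruction" idea can be sketched as a recording wrapper; everything below is hypothetical illustration, not any browser's implementation. The catch described above shows up in `drawImage`: a faithful replay needs the source's pixels as they were at record time, and taking that snapshot from a GPU-resident <video> or <canvas> is itself the expensive readback.

```javascript
// Stub standing in for the expensive GPU-to-CPU pixel readback.
function snapshotPixels(source) {
  return { snapshotOf: source };
}

// Sketch: log 2d commands so they can be replayed after a context loss.
class RecordingContext {
  constructor(realCtx) {
    this.realCtx = realCtx;
    this.log = [];
  }
  fillRect(x, y, w, h) {
    // Path-only commands record cheaply: just the arguments.
    this.log.push(['fillRect', x, y, w, h]);
    this.realCtx.fillRect(x, y, w, h);
  }
  drawImage(source, x, y) {
    // The problem case: the source is dynamic, so replay needs a pinned
    // copy of its pixels - a readback at record time.
    const snapshot = snapshotPixels(source);
    this.log.push(['drawImage', snapshot, x, y]);
    this.realCtx.drawImage(source, x, y);
  }
  replay(freshCtx) {
    for (const [op, ...args] of this.log) freshCtx[op](...args);
  }
}
```

Note that the log (and every snapshot in it) must be retained for the lifetime of the canvas, which is the memory cost mentioned above.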
The more basic problem with all of these approaches is that they require
considerable complexity, time and memory to deal with a (hopefully) rare
situation. There will never be a benchmark that involves a context loss in
the middle, so any time spent on recovery is time wasted.
> On Mon, 3 Sep 2012, Benoit Jacob wrote:
> > Remember this adage from high-performance computing which applies here
> > as well: "The fast drives out the slow even if the fast is wrong".
> This isn't an issue of the spec -- there is existing content that would be
> broken.
It is the spec's problem insofar as the spec wants to reflect reality. I
really doubt UAs will be able to implement something significantly more
complicated or slower than what they have been shipping for a few years.
I think it would be useful for some sorts of applications to be notified
when the image buffer data is lost so that they could regenerate it. This
would be useful for applications that use a canvas to cache mostly-static
intermediate data, or applications that only repaint dirty rectangles in
response to changes.
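A notification hook for such applications could look like the following sketch. The `contextlost` and `contextrestored` event names are an assumption for illustration, not something the current spec defines:

```javascript
// Sketch (hypothetical event names): an app that caches mostly-static
// intermediate data in a canvas rebuilds it only when told the image
// buffer was lost, instead of repainting it every frame.
function installLossRecovery(canvas, rebuildCache) {
  let cacheValid = false;
  canvas.addEventListener('contextlost', () => {
    // Image buffer gone; mark the cached intermediate data stale.
    cacheValid = false;
  });
  canvas.addEventListener('contextrestored', () => {
    // Repaint the expensive static content once.
    rebuildCache();
    cacheValid = true;
  });
  return () => cacheValid;
}
```

The app's frame loop would consult the returned validity check and skip the expensive regeneration whenever the cache is still good.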