[whatwg] Hardware accelerated canvas
emoller at opera.com
Mon Sep 3 01:53:23 PDT 2012
On Mon, 03 Sep 2012 00:14:49 +0200, Benoit Jacob <bjacob at mozilla.com> wrote:
> ----- Original Message -----
>> On Sun, 2 Sep 2012, Erik Möller wrote:
>> > As we hardware accelerate the rendering of <canvas>, not just with
>> > the webgl
>> > context, we have to figure out how to best handle the fact that
>> > GPUs lose the
>> > rendering context for various reasons. Reasons for losing the
>> > context differ
>> > from platform to platform but range from going into power-save
>> > mode, to
>> > internal driver errors and the famous long running shader
>> > protection.
>> > A lost context means all resources uploaded to the GPU will be gone
>> > and have
>> > to be recreated. For canvas it is not impossible, though IMO
>> > prohibitively
>> > expensive, to try to automatically restore a lost context and
>> > guarantee the
>> > same behaviour as in software.
>> > The two options I can think of would be to:
>> > a) read back the framebuffer after each draw call.
>> > b) read back the framebuffer before the first draw call of a
>> > "frame" and build
>> > a display list of all other draw operations.
>> > Neither seems like a particularly good option if we're looking to
>> > actually
>> > improve on canvas performance. Especially on mobile where read-back
>> > performance is very poor.
>> > The WebGL solution is to fire an event and let the
>> > js-implementation deal with
>> > recovering after a lost context:
>> > http://www.khronos.org/registry/webgl/specs/latest/#5.15.2
>> > My preferred option would be to make a generic context lost event
>> > for canvas,
>> > but I'm interested to hear what people have to say about this.
>> Realistically, there are too many pages that have 2D canvases that are
>> drawn to once and never updated for any solution other than "don't lose
>> the data" to be adopted. How exactly this is implemented is a quality of
>> implementation issue.
> With all the current graphics hardware, this means "don't use a GL/D3D
> surface to implement the 2d canvas drawing buffer storage", which
> implies: "don't hardware-accelerate 2d canvases".
> If we agree that 2d canvas acceleration is worth it despite the
> possibility of context loss, then Erik's proposal is really the only
> thing to do, as far as current hardware is concerned.
> Erik's proposal doesn't worsen the problem in any way; it acknowledges
> a problem that already exists and offers to Web content a way to recover
> from it.
> Hardware-accelerated 2d contexts are no different from
> hardware-accelerated WebGL contexts, and WebGL's solution has been
> debated at length already and is known to be the only thing to do on
> current hardware. Notice that similar solutions preexist in the system
> APIs underlying any hardware-accelerated canvas context: Direct3D's lost
> devices, EGL's lost contexts, OpenGL's ARB_robustness context loss
> detection.
>> Ian Hickson
>> http://ln.hixie.ch/
I agree with Benoit: this is an existing problem; I'm just pointing the
spotlight at it. If we want to take advantage of hardware acceleration
for canvas, this is an issue we will have to deal with.
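The WebGL recovery pattern linked above can be sketched in plain
JavaScript. Since no browser canvas is available here, a tiny FakeCanvas
stand-in plays the role of the event target; the event names match the
WebGL spec, but the resource bookkeeping is illustrative only:

```javascript
// Sketch of the WebGL-style context-loss recovery pattern.
// "webglcontextlost"/"webglcontextrestored" are the spec's event names;
// FakeCanvas is only a stand-in so the flow can run outside a browser.
class FakeCanvas {
  constructor() { this.handlers = {}; }
  addEventListener(type, fn) {
    if (!this.handlers[type]) this.handlers[type] = [];
    this.handlers[type].push(fn);
  }
  dispatch(type, event) {
    (this.handlers[type] || []).forEach((fn) => fn(event));
  }
}

const canvas = new FakeCanvas();
let resourcesReady = false;
const log = [];

canvas.addEventListener('webglcontextlost', (e) => {
  e.preventDefault();      // signal that we intend to handle restoration
  resourcesReady = false;  // all GPU-side resources are now invalid
  log.push('lost');
});

canvas.addEventListener('webglcontextrestored', () => {
  resourcesReady = true;   // re-upload textures, buffers, shaders here
  log.push('restored');
});

// Simulate a driver-initiated loss followed by recovery:
canvas.dispatch('webglcontextlost', { preventDefault() {} });
canvas.dispatch('webglcontextrestored', {});
```

The key point for a generic canvas event would be the same: the page,
not the browser, decides what to re-create after a loss.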
I don't particularly like this idea, but for the sake of having all the
options on the table I'll mention it. We could default to the "old
behaviour" and have an opt-in for hardware-accelerated canvas, in which
case you would have to respond to said context-lost event. That would
allow existing content to keep working without changes. It would be more
work for vendors, but it's up to each vendor to decide how best to solve
it, either by doing it in software or by using the expensive read-back
alternative in hardware.
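Option (b) from my first mail, building a display list of draw
operations so the framebuffer can be rebuilt after a loss, can be
sketched as a toy replay mechanism. The ReplayableSurface class and all
of its members are hypothetical names for illustration, not any real
canvas API:

```javascript
// Toy sketch of the display-list approach (option b): record each draw
// operation so the surface can be replayed after a context loss.
// ReplayableSurface and its members are hypothetical, illustrative names.
class ReplayableSurface {
  constructor() {
    this.pixels = [];       // stand-in for the GPU framebuffer contents
    this.displayList = [];  // recorded draw operations for replay
  }
  draw(op) {
    this.displayList.push(op);  // record the operation...
    this.pixels.push(op);       // ...and "execute" it
  }
  loseContext() {
    this.pixels = [];           // GPU storage is gone
  }
  restoreContext() {
    // The recorded operations rebuild the framebuffer contents.
    for (const op of this.displayList) this.pixels.push(op);
  }
}

const surface = new ReplayableSurface();
surface.draw('fillRect');
surface.draw('drawImage');
surface.loseContext();
surface.restoreContext();
// surface.pixels is now ['fillRect', 'drawImage'] again
```

The cost this hides is exactly the one discussed above: the recording
never shrinks unless you periodically read back and collapse it into a
single "blit" operation, which is what makes the approach expensive.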
Like I said, not my favourite option, but I agree it's bad to break
existing content.

--
Erik Möller
Core Gfx Lead