[whatwg] Hardware accelerated canvas
ian at hixie.ch
Mon Sep 3 09:11:37 PDT 2012
On Sun, 2 Sep 2012, Benoit Jacob wrote:
> > Realistically, there are too many pages that have 2D canvases that are
> > drawn to once and never updated for any solution other than "don't
> > lose the data" to be adopted. How exactly this is implemented is a
> > quality of implementation issue.
> With all the current graphics hardware, this means "don't use a GL/D3D
> surface to implement the 2d canvas drawing buffer storage", which
> implies: "don't hardware-accelerate 2d canvases".
There are ways to make it work without forgoing acceleration, e.g. taking
regular "backups" of the canvas contents, remembering every instruction
that was sent to the canvas, etc.
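The "remember every instruction" approach could be sketched roughly like
this (purely illustrative -- the function names are invented, not any
browser API; a real implementation would live inside the engine):

```javascript
// Illustrative sketch only: wrap a 2D context so every draw call and
// property write is logged, allowing a full replay into a fresh context
// after the GPU surface backing the canvas is lost.
function makeRecordingContext(ctx) {
  const log = [];
  const proxy = new Proxy(ctx, {
    get(target, prop) {
      const value = target[prop];
      if (typeof value === "function") {
        return (...args) => {
          log.push({ kind: "call", prop, args }); // remember the instruction
          return value.apply(target, args);
        };
      }
      return value;
    },
    set(target, prop, value) {
      log.push({ kind: "set", prop, value });     // remember state changes too
      target[prop] = value;
      return true;
    },
  });
  // Replay the whole log into a fresh context after a context loss.
  const replay = (freshCtx) => {
    for (const entry of log) {
      if (entry.kind === "call") freshCtx[entry.prop](...entry.args);
      else freshCtx[entry.prop] = entry.value;
    }
  };
  return { proxy, replay };
}
```

The obvious cost is unbounded log growth, which is why it would be paired
with the periodic "backup" bitmaps mentioned above: flatten the log into a
snapshot every so often and replay only from the last snapshot.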
> Erik's proposal doesn't worsen the problem in any way -- it acknowledges
> a problem that already exists and offers to Web content a way to recover
> from it.
The problem is that there is content that doesn't recover, and assumes the
problem doesn't exist. That makes it our problem.
On Mon, 3 Sep 2012, Benoit Jacob wrote:
> Remember this adage from high-performance computing which applies here
> as well: "The fast drives out the slow even if the fast is wrong".
> Browsers want to have good performance on Canvas games, demos and
> benchmarks. Users want good performance too. GL/D3D helps a lot there,
> at the cost of a rather rare -- and practically untestable -- problem
> with context loss. So browsers are going to use GL/D3D, period. On the
> desktop, most browsers already do. It seems impossible for the spec to
> require not using GL/D3D and get obeyed.
On Sun, 2 Sep 2012, Glenn Maynard wrote:
> If the choice becomes "follow the spec and don't hardware-accelerate
> canvas" vs. "don't follow the spec and get orders of magnitude better
> performance", I suspect I can guess the choice implementors will make
> (implementors invited to speak for themselves, of course).
This isn't an issue of the spec -- there is existing content that would
break if canvas contents could silently be lost.
On Mon, 3 Sep 2012, Erik Möller wrote:
> I don't particularly like this idea, but for the sake of having all the
> options on the table I'll mention it. We could default to the "old
> behaviour" and have an opt in for hardware accelerated canvas in which
> case you would have to respond to said context-lost event. That would
> allow the existing content to keep working as it is without changes. It
> would be more work for vendors, but it's up to every vendor to decide
> how to best solve it, either by doing it in software or using the
> expensive read-back alternative in hardware.
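As a toy model of that opt-in (all names invented here for illustration;
nothing below is proposed spec text), the browser-side decision might look
like this:

```javascript
// Toy simulation of the opt-in idea: a page that asks for acceleration
// must handle context loss; a page that doesn't opt in keeps today's
// guaranteed "never lose the bitmap" behaviour. All names are invented.
class SimCanvas {
  constructor() {
    this.accelerated = false;
    this.oncontextlost = null;
  }
  getContext(type, options = {}) {
    // Hypothetical opt-in flag; without it the vendor must preserve
    // contents, via software rendering or an expensive GPU read-back.
    this.accelerated = !!options.accelerated;
    return { canvas: this };
  }
  // What the browser does when the GPU discards the surface.
  simulateGpuLoss() {
    if (!this.accelerated) return "contents preserved";
    if (this.oncontextlost) {
      this.oncontextlost();
      return "page notified";
    }
    return "contents lost"; // opted in but never wired up the event
  }
}
```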
On Sun, 2 Sep 2012, Rik Cabanier wrote:
> If there was a callback for context loss and if the user had set it, a
> browser could throw the entire canvas out and ask for it to be
> re-rendered if the canvas is shown again. This would even make sense if
> you don't have a HW accelerated canvas.
> There would be no backward compatibility issue either. If the user
> doesn't set the callback, a browser would have to do something
> reasonable to keep the canvas bitmap around.
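The discard-and-re-render variant could be modelled like so (again, every
name here is invented; the point is that merely installing the callback is
the opt-in):

```javascript
// Toy model of the callback idea: if the page installed a restore
// callback, the browser may throw the backing bitmap away while the
// canvas is offscreen and ask the page to redraw when it is shown again.
// Without a callback the browser keeps the bitmap, preserving today's
// behaviour -- hence no backward-compatibility problem.
class DiscardableCanvas {
  constructor() {
    this.bitmap = null;
    this.onrestore = null; // hypothetical context-loss callback
  }
  paint(pixels) { this.bitmap = pixels; }
  hide() {
    // Safe to discard only if the page promised it can re-render.
    if (this.onrestore) this.bitmap = null;
  }
  show() {
    if (this.bitmap === null && this.onrestore) this.onrestore(this);
    return this.bitmap;
  }
}
```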
This is an interesting idea... do other vendors want to provide something
like this?
Ian Hickson U+1047E )\._.,--....,'``. fL
http://ln.hixie.ch/ U+263A /, _.. \ _\ ;`._ ,.
Things that are impossible just take longer. `._.-(,_..'--(,_..'`-.;.'