[whatwg] Enabling LCD Text and antialiasing in canvas

Adam Barth w3c at adambarth.com
Sat Nov 24 19:10:29 PST 2012

On Fri, Nov 23, 2012 at 3:04 PM, Ian Hickson <ian at hixie.ch> wrote:
> On Mon, 12 Nov 2012, Justin Novosad wrote:
>> For many types of apps, DOM-based rendering is uncompetitively slow
>> [so we should make text rendering in canvas more controllable]
> This seems like something we should fix, not something we should work
> around by having people use canvas instead. Using canvas has all kinds of
> terrible side-effects, like reducing the likely accessibility of the page,
> making searchability much worse, etc.
> Also, do you have any metrics showing the magnitude of this problem on
> real-world sites that might consider using canvas instead?

The metrics I've seen show that the magnitude of this problem is
approximately 8x (to the extent that it's sensible to represent the
magnitude with a number).

As far as I can tell, the issue really boils down to the DOM being
retained mode and canvas being immediate mode.  As the size of the
underlying model grows, uploading the entire model into the DOM
becomes increasingly uncompetitive with having the application manage
the model and drawing using immediate mode because the application can
use application-specific knowledge to avoid having to process large
portions of the model.
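To make that concrete, here is a minimal sketch of the kind of application-side culling an immediate-mode renderer allows (the model shape and the name `visibleItems` are my own invention, not from the experiments described below):

```javascript
// Hypothetical model: a flat array of items with x/y positions.
// In immediate mode the application can skip everything outside the
// viewport instead of materializing the whole model in a retained
// scene (the DOM).
function visibleItems(model, viewport) {
  return model.filter(item =>
    item.x >= viewport.left && item.x < viewport.right &&
    item.y >= viewport.top  && item.y < viewport.bottom);
}

// Example: a million-item model, but only the on-screen slice is processed.
const model = Array.from({ length: 1000000 }, (_, i) =>
  ({ x: i % 1000, y: Math.floor(i / 1000), label: 'item ' + i }));
const viewport = { left: 0, right: 50, top: 0, bottom: 40 };
const onScreen = visibleItems(model, viewport);
// onScreen.length is 2000 (50 columns x 40 rows) — a tiny fraction of the model.
```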

I'm unsure what details I'm able to share about these experiments.
Justin might know better what we're able to share, but the outcomes
are roughly:

1) Uploading the entire document model into DOM (say for a model that
requires 1 million elements to represent) causes the application to
become completely unusable.  Memory consumption goes off the charts,
hit testing noticeably lags, etc.  We've been working to improve
performance here, but there are limits to what we'll be able to
achieve.  For example, the DOM is always going to be less
memory-efficient than an application-specific representation.  On some
rendering benchmarks, a variety of browsers are able to render these
models at about 8 fps.
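For reference, approach (1) amounts to something like the following sketch (the element structure and model shape are invented here; the real applications are surely more elaborate):

```javascript
// Approach (1), sketched: retained mode. Upload every model item into
// the DOM once and let the browser own the full scene. With a
// million-item model this means roughly a million live nodes, which is
// where the memory and hit-testing costs come from.
function uploadModel(doc, container, model) {
  for (const item of model) {
    const el = doc.createElement('div');
    el.textContent = item.label;
    container.appendChild(el);
  }
}
```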

2) Remarkably, the current best candidate is a rendering pipeline that
attempts to use the DOM in immediate mode.  The application performs
some application-specific processing to determine which portions of
the model can actually affect what's drawn on screen, and then the
application uses innerHTML to create DOM for that portion of the
model.  (They've experimented with a bunch of choices and innerHTML
appears to be the fastest way to use the DOM as an immediate mode
API.)  Using this pipeline, the application uses reasonable amounts of
memory and hit testing, etc., aren't impacted.  This pipeline gets
about 20 fps.
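A sketch of approach (2) — using innerHTML as a pseudo-immediate-mode API — might look like this (function names are mine; this is not their actual code):

```javascript
// Approach (2), sketched: each frame, build one big markup string for
// only the visible slice of the model, then hand it to the HTML parser
// via innerHTML. The old subtree is thrown away and reparsed from
// scratch every frame.
function renderMarkup(items) {
  let html = '';
  for (const item of items) {
    // Real code would HTML-escape item.label; omitted to keep the sketch short.
    html += '<div class="item">' + item.label + '</div>';
  }
  return html;
}

// Per frame, in a browser:
//   container.innerHTML = renderMarkup(visibleSlice);
```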

3) Once they've moved from (1) to (2), you can understand why the next
logical step is to use a real immediate mode API, like canvas or
WebGL.  Approach (2) is totally nutty: there's no way the right design
is to build up a giant string of markup and then run it through the
parser for every frame.  Using canvas, the application has no trouble
achieving 60 fps.
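Approach (3) in sketch form, assuming a 2D canvas and a hypothetical `visibleSlice` helper (neither is from their code):

```javascript
// Approach (3), sketched: true immediate mode. Redraw only the visible
// slice of the model into a canvas on every animation frame; no
// per-item retained objects exist outside the application's own model.
function drawFrame(ctx, items) {
  ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
  for (const item of items) {
    ctx.fillText(item.label, item.x, item.y);
  }
  return items.length; // how many items were actually drawn this frame
}

// In a browser, driven by the requestAnimationFrame loop:
//   const ctx = document.querySelector('canvas').getContext('2d');
//   (function tick() {
//     drawFrame(ctx, visibleSlice());
//     requestAnimationFrame(tick);
//   })();
```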

I think the real question here is how we want applications with
very large models to render.  Do we really want them to upload their
entire models into the DOM?  If not, how can we provide them with a
high-quality immediate-mode rendering pipeline?

