[whatwg] Adding SVG Filter-like functionality to Canvas 2D Context
hansschmucker at gmail.com
Mon Jul 6 07:09:02 PDT 2009
I've recently done some experiments using SVG filters (see
http://www.tapper-ware.net/stable/web.filter.voxels/index.xhtml ). SVG
Filters basically offer greater speed for users and easier optimization for
implementing parties than trying to implement standard image manipulation
in script: filter primitives can easily be implemented via graphics
accelerators, which results in far greater speed, which is especially
important for mobile devices.
A large share of the typical operations performed by Canvas developers can
be expressed more easily via an SVG-Filter-like interface: contrast,
brightness, blur and so on. Right now, such operations can sometimes be
done in a hackish way, by combining the target image with a solid
black/white picture and then masking it, or by using getImageData, which is
very slow.
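As a rough illustration of why the getImageData route is slow, here is what
even a simple brightness filter looks like when done per-pixel in script
(the helper name and code are mine, for illustration only; "data" stands
for the flat RGBA byte array you get from ImageData.data):

```javascript
// Brightness via raw pixel access, the way a getImageData-based approach
// must do it: every byte of every pixel is touched in script, with no
// chance for the browser to hand the work to a graphics accelerator.
function adjustBrightness(data, factor) {
  var out = new Uint8ClampedArray(data.length);
  for (var i = 0; i < data.length; i += 4) {
    out[i]     = data[i]     * factor; // red   (clamped to 0..255)
    out[i + 1] = data[i + 1] * factor; // green
    out[i + 2] = data[i + 2] * factor; // blue
    out[i + 3] = data[i + 3];          // alpha is left untouched
  }
  return out;
}
```

A single declarative filter node (feComponentTransfer or feColorMatrix in
SVG terms) could express the same operation and let the implementation
vectorize or accelerate it.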
Mozilla also includes drawWindow, which allows a filtered element to be
imported back into the Canvas, but there are dozens of security issues if
you try to do that on the web (which is why it's only allowed for
privileged chrome code).
Aside from drawWindow, we are currently unable to build a processing chain
that includes filters, other than applying a filter to the result
rendered by the canvas via foreignObject.
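For reference, that foreignObject workaround looks roughly like this (the
filter, ids and sizes are made up for illustration). Note that the filter
runs on the finished canvas output only, not on individual drawing
operations, which is exactly the limitation described above:

```xml
<svg xmlns="http://www.w3.org/2000/svg" width="300" height="150">
  <filter id="blur">
    <feGaussianBlur stdDeviation="2"/>
  </filter>
  <!-- The whole canvas is filtered as one opaque block of output. -->
  <foreignObject width="300" height="150" filter="url(#blur)">
    <canvas xmlns="http://www.w3.org/1999/xhtml" width="300" height="150"></canvas>
  </foreignObject>
</svg>
```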
SVG Filters are a relatively simple spec, whose most important parts can
be implemented in a matter of hours. Also, since all browsers that
currently support Canvas also support SVG, the actual functionality is
already included in the source, so very little new code would have to be
written.
I'm willing to write a spec for it if there's any interest, but for now,
I'll just give you an example of how this could work.
The 2D Context gets a new method "createFilterChain()", which, when
invoked, returns a Canvas2DContextFilterChain object.
A Canvas2DContextFilterChain is applied to any drawing operation, much
like globalAlpha, via a globalFilterChain property.
A Canvas2DContextFilterChain is bound to a specific Canvas and cannot be
applied to anything else.
The affected region is by default the rectangle of the operation.
Canvas2DContextFilterChain instances have methods to add
Canvas2DContextFilterNode elements, one for each type.
Note that there is no feImage equivalent, to keep security complexities to
a minimum.
Data is passed into a filter chain as BackgroundImage/Alpha (containing the
image data in the target rectangle) and SourceImage/SourceAlpha for the
data that is supposed to be drawn.
Each Canvas2DContextFilterNode operates in the RGBA32 space, no other color
space is supported.
By default, each Canvas2DContextFilterNode is applied to a 0%,0% 100%,100%
rectangle (the -10%,-10% rule of SVG is not applied).
For each Node, the coordinate system can be set to be Canvas relative or
affected region relative.
Units are either numbers for absolute offsets from the chosen rectangle or
percentages of the same.
Instead of the string based buffer system of SVG, each
Canvas2DContextFilterNode has connectOutputImage/Alpha methods.
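To make this concrete, a script using the proposed interface might look as
follows. Every name here (createFilterChain, globalFilterChain, the add*
node methods, connectOutputImage and the setter names) is part of the
proposal, i.e. hypothetical, and the exact spellings are only suggestions:

```javascript
var ctx = canvas.getContext("2d");

// Build a chain bound to this canvas (proposed API, does not exist yet).
var chain = ctx.createFilterChain();   // returns a Canvas2DContextFilterChain

// Add nodes, one method per supported filter primitive type.
var blur = chain.addGaussianBlurNode();
blur.setStdDeviation(2);

var blend = chain.addBlendNode();
// Wire node outputs directly, instead of SVG's string-based
// result/in buffer names:
blur.connectOutputImage(blend);

// Apply to subsequent drawing, analogous to globalAlpha:
ctx.globalFilterChain = chain;
ctx.drawImage(img, 0, 0);              // drawn through the filter chain
```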
Am I the only one seeing any benefit in this, or does anybody else think
there would be hope for such a proposal?