[whatwg] Canvas-related feedback

Ian Hickson ian at hixie.ch
Mon Dec 17 15:43:44 PST 2012


On Wed, 12 Sep 2012, Michael Day wrote:
> > 
> > Yeah, that's why the spec hand-waves to transform the line too... but 
> > I agree that that doesn't really work.
> > 
> > Do you have any suggestion of how to spec this better?
> 
> This is the most general arcTo situation:
> 
>     setTransform(M0)
>     lineTo(x0, y0)
>     setTransform(M)
>     arcTo(x1, y1, x2, y2, radius, ...)
> 
> To generate the arc we need three points: P0, P1, P2, all in the same
> coordinate system. The three points are:
> 
> P0 = inverse(M) * M0 * (x0, y0)
> P1 = (x1, y1)
> P2 = (x2, y2)
> 
> We are transforming (x0, y0) by M0, which is the transform current at the time
> the point was added to the path. This gives us a point in canvas coordinates
> that we can transform by the inverse of M, which is the transform current at
> the time the arc is added to the path. This gives us a point in the same
> coordinate space as P1 and P2.
> 
> In the common case where M = M0, the transforms cancel each other out and P0 =
> (x0, y0).
> 
> Once we have the three points in the same coordinate space we can generate the
> arc and then apply M to all of the points in the generated arc to draw the arc
> in canvas coordinates.
> 
> Does this make sense?
> 
> I don't think it is possible to specify this process without requiring 
> an inverse transformation somewhere, to get all three points into the 
> same coordinate space. If so, it is probably best to describe this 
> explicitly, rather than ambiguously implying the need for it.

I think it makes sense, but I'd really rather not get into the nitty 
gritty of the maths if it can at all be avoided. I've tried to do as you 
describe, though.
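Michael's inverse-transform step can be sketched with plain 2D affine matrices. This is a minimal sketch with hypothetical helper names, using the canvas-style `[a, b, c, d, e, f]` layout where a point maps as x' = a·x + c·y + e and y' = b·x + d·y + f:

```javascript
function inverse(m) {
  // Invert a canvas-style affine matrix [a, b, c, d, e, f].
  // Assumes the matrix is invertible (non-zero determinant).
  var det = m[0] * m[3] - m[1] * m[2];
  return [
     m[3] / det, -m[1] / det,
    -m[2] / det,  m[0] / det,
    (m[2] * m[5] - m[3] * m[4]) / det,
    (m[1] * m[4] - m[0] * m[5]) / det,
  ];
}

function transformPoint(m, p) {
  return [m[0] * p[0] + m[2] * p[1] + m[4],
          m[1] * p[0] + m[3] * p[1] + m[5]];
}

// The last path point (x0, y0), added under transform M0, expressed in the
// coordinate space that is current (transform M) when arcTo() is called:
// P0 = inverse(M) * M0 * (x0, y0).
function lastPointInArcSpace(M, M0, x0, y0) {
  return transformPoint(inverse(M), transformPoint(M0, [x0, y0]));
}
```

When M equals M0 the two transforms cancel and the function returns (x0, y0), matching the common case described in the quoted message.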


On Thu, 20 Sep 2012, Dirk Schulze wrote:
> On Sep 21, 2012, at 3:12 AM, Ian Hickson <ian at hixie.ch> wrote:
> >> 
> >> The only situation that might be reasonable would be a transform on 
> >> the Canvas that an author wants to cover in the Path. But for the rare 
> >> case where this matters, you can create a new Path object, add your 
> >> path with the transform and call isPointInPath.
> > 
> > Yeah, you could do that too.
> > 
> >> Furthermore, a transform() function that applies to a Path object 
> >> seems to be usable as well.
> > 
> > You can create a new Path, then add another Path to it while applying 
> > a transform, using the addPath() method.
> 
> Yes, it is possible. But there are two reasons why I think that it still 
> makes sense to use a transform function. First, it seems natural to have 
> a transform on the Path object, like CanvasRenderingContext2D already 
> has. Both share a lot of functions, so why disallow it for transforms?

The main reason I didn't add it to Path is because it led to a bit of 
confusion in terms of what the transform applied to. Does it apply when 
you add points _to_ the path? Does it apply when you draw the path on 
another path or the canvas? Also, one of the more confusing aspects of the 
canvas API is that you can change the coordinate space while adding lines 
to the path, and we had a whole era wherein implementations and the spec 
were confused as to what exactly happened when (did the points get 
transformed? Did the transform apply only when you stroke? etc).

So in the new API I side-stepped the whole problem.


> Second, the solution that you mention requires a copy operation. A lot 
> of libraries would create a new path, add the other path and apply the 
> transform afterwards. Seems unnecessary to me.

I don't really follow. Can you elaborate?


On Fri, 21 Sep 2012, Rik Cabanier wrote:
> On Fri, Sep 21, 2012 at 2:12 AM, Ian Hickson <ian at hixie.ch> wrote:
> > On Thu, 20 Sep 2012, Dirk Schulze wrote:
> > >
> > > The interface of CanvasRenderingContext2D currently has a function 
> > > called isPointInPath() with a Path object as input [1]. I wonder why 
> > > this needs to be on the context interface. Wouldn't it make more 
> > > sense on the interface of Path object itself? If an implementation 
> > > really needs a context to get a point on a path, it can create one 
> > > on its own.
> >
> > I don't think it would make _more_ sense, but I agree that it would 
> > make equal amounts of sense.
> >
> > In practice you're pretty much always going to have a context around 
> > when you want to check this, because the reason you'd use it is to see 
> > where a mouse click landed on a canvas. And you're going to want the 
> > Path object transformed as per the transform on the canvas, generally 
> > speaking.
> 
> Not necessarily. The path object makes sense outside of canvas as well. 
> You don't need a context to create it so we are thinking of integrating 
> it with SVG. It would increase interop and simplify the APIs if you 
> could ask SVG elements for their path, or create them with one.

Sure, but with SVG you don't need to know if a point is in a path, because 
the events get dispatched to the right path by the UA.


> Maybe if there was a 'currentpath' property on the 2d context, you can 
> move 'isPointInPath' to the path object. This would let you get rid of 
> the extra 'isPointInPath' that takes a path too.
> 
> so:
>   myContext.currentpath.isPointInPath(...)

I don't really see what problem this solves.


> Most of the time you don't want to know if a point falls in the current 
> path in the canvas. You want to know after you finish drawing and the 
> user clicks on the canvas whether they hit a region. By then the path in the 
> canvas is long gone (unless you want to go through the trouble of 
> redrawing everything).

Sure, and for that we have the hit region API.


> I agree with Ian that there shouldn't be a method that changes the current
> points in a path.
> However, if there was a 'transform' function that took a matrix and
> returned a transformed path, it would do what Dirk wants.
> In addition, the path APIs could be simplified since you can take out all
> the 'transform' arguments.

Can you elaborate on the use case for this? Why would you transform a path 
without immediately using it somewhere?


On Sat, 22 Sep 2012, Dirk Schulze wrote:
> 
> Would it be possible to extend CanvasRenderingContext2D with the functions:
> 
> void addPath(Path path); - which adds a Path object to the current path on Canvas?
> attribute Path currentPath; - that returns a copy of the current path, or let you replace the current path with another Path object (not live)?

It is definitely possible, but before we add anything, we must know what 
the use cases are so that we can evaluate if this is the best solution.


On Fri, 28 Sep 2012, Dirk Schulze wrote:
> 
> currentPath would not be live, no. But if you really want op(Path), then 
> it raises the question why we have path arithmetic in 
> CanvasRenderingContext2D at all. Then it should be completely separated 
> (which is not the case). What would be the sense for op(Path), if you 
> have currentPath attribute?

CanvasRenderingContext2D has a path-related API because that's what has 
shipped and we can't break back compat.


On Sat, 29 Sep 2012, Rik Cabanier wrote:

> Currentpath could still be handy if you want to copy a path from one 
> canvas to another, or if you have existing code that you are migrating. 
> For instance, if you're going to use hit regions, it would be handy to 
> have.

Why not just draw the path to a Path?

I'd rather not add API surface just for transitory needs.


Dirk Schulze wrote:
> On Sep 24, 2012, at 4:32 PM, Ian Hickson <ian at hixie.ch> wrote:
> > On Mon, 24 Sep 2012, Dirk Schulze wrote:
> >> 
> >> Making the path syntax more complex than it needs to be seems not to 
> >> be an option for me.
> > 
> > It's definitely an option, assuming this is not a trivial statement, 
> > because it's not the only axis along which the syntax can be optimised, 
> > and it is not the axis that has been used previously. Specifically, 
> > the path syntax has clearly been optimised for terseness and power, at 
> > the expense of simplicity.
>
> And this simplicity is what developers ask for over the power of the 
> existing segments. It is already telling that not even authoring tools 
> use a/A: neither Illustrator nor Inkscape, the best-known SVG creation 
> tools.

It's always possible for constraints to change; I was just disagreeing 
with your statement above regarding what was an option and what was not.


> > Nothing can quite mess up an API or language like changing 
> > optimisation function (changing design philosophy) half-way through 
> > its life. You end up with languages that feel like they have multiple 
> > personality disorder, and it ends up being much harder to learn the 
> > language than if it was consistent with itself but overall more 
> > complicated than it needs to be.
>
> Sometimes this is the price of backwards compatibility.

Yes, but it's a _very_ high price, rarely worth the cost. In particular, 
it _doesn't_ result in a simpler language, so if simplicity is the goal, 
then this is not the way to do it.


> > That is, a language that has complexity 10 throughout is easier to 
> > learn and use than a language that is half complexity 10 and half 
> > complexity 1. This is a lesson we have learnt many times on the Web, 
> > not least of which with HTML, which has lurched in many directions 
> > over its lifetime, leaving authors highly frustrated and confused.
>
> Well, we just add new segments and don't mess up the existing syntax. 
> Authors have the freedom to choose between the path syntax they want.

"Authors have the freedom to choose between the path syntax they want" is 
exactly the mistake I'm talking about. Adding more flexibility doesn't 
make things simpler, it makes things more complex. For example, authors 
have to learn everything, because they might have to maintain stuff 
written by someone who used the part of the API you'd rather were ignored.


> >> To be honest it seems very confusing to me that Canvas has arc() and 
> >> ellipse() but no ellipseTo() as a counterpart to ellipse().
> > 
> > ellipse() exists only because arc() already had optional arguments; had 
> > arc() been extended to support ellipse()'s use case, the resulting API 
> > would have needed either split radii arguments or optional arguments in 
> > the middle, which is IMHO even worse than adding another method. It's 
> > an unfortunate situation, certainly.
>
> I believe you that this wasn't your intention initially, but the naming 
> is now inconsistent: Canvas has arc() and ellipse() on the one side and 
> just arcTo() on the other.

Yes, as I said, it's an unfortunate situation.


> > On Mon, 24 Sep 2012, Tab Atkins Jr. wrote:
> >> 
> >> Returning to the original subject, we don't *want* optional arguments 
> >> here.
> > 
> > Well, the canvas API has optional arguments, so there's no way to be 
> > consistent with canvas with this constraint. (This is the case even 
> > before ellipses are considered.)
>
> Yes, there have been optional arguments before. The question just 
> relates to arcTo() and ellipseTo(): they don't need to differ so much 
> from the idea of arc() and ellipse().

If the goal is no optional arguments, this doesn't achieve your goal. So I 
don't follow.


> >> We would prefer to align the syntaxes of canvas path and SVG path as 
> >> much as possible, to help authors translate knowledge between the 
> >> two.
> > 
> > I think it would be far more useful for SVG to be consistent with 
> > itself, have no similarity with canvas, and present a sane language, 
> > than to have SVG inconsistent with both itself and canvas, and present 
> > a fractured language that is hard to learn.
>
> Not if we have feedback from authors that the current syntax is too 
> complex for a lot of cases. It doesn't make sense to act against the 
> wishes of web authors.

Oh it _absolutely_ makes sense to "act against wishes of web authors" in 
some situations. Authors often do not realise what they want from 
languages, much like users don't realise what they want from software, or 
people who live in houses don't realise what they want from architecture, 
or people who eat don't realise what they want from food.

It's the job of the language designer / interaction designer / architect / 
cook to provide the people with what they actually want even when they 
don't realise that they want it.

(Nothing illustrates this better than usability studies, where users will 
ask for the most inane of features and yet, when presented with well- 
designed features that address their underlying use cases, respond very 
positively and indicate that that is exactly what they wanted all along.)


> >> As such, we're requesting that the canvas path API change to be 
> >> consistent with itself, in the direction that we prefer.
> > 
> > I believe the canvas API is adequately consistent with itself given 
> > the constraints facing this API's evolution, and so have not changed 
> > it.
>
> I don't see the changes on SVG as a requirement for Canvas to change the 
> API. I am just in favor of harmonization. Canvas and SVG don't need 
> to be two fully separate drawing languages. There is a lot of interest 
> from web authors to use both.

I think the two have started with very different design philosophies and 
attempting to merge them is very misguided.


On Tue, 25 Sep 2012, Rik Cabanier wrote:
> > >
> > > I'm working on a spec to add blending and compositing through simple 
> > > CSS keywords. It is trying to define a generic model that is not 
> > > specific to Canvas, HTML or SVG and then lists how the model could 
> > > be implemented. We've gotten some comments that this feature would 
> > > be useful in Canvas as well so I was wondering if it made sense to 
> > > add it to the canvas API.
> >
> > Is there any chance of adding filters to this also?
> 
> CSS Filters are already defined here: 
> https://dvcs.w3.org/hg/FXTF/raw-file/tip/filters/index.html. The specs 
> refer to each other for some concepts but are pretty separate. 
> (Filtering generally works on one image, while blending and 
> compositing describe how two images are combined.)
> 
> One possibility would be to add a globalFilterOperation property that 
> takes the arguments of the 'filter' property [1]

I haven't added this yet, but if this is something that UAs want to 
implement, I'm happy to do so.


> > https://dvcs.w3.org/hg/FXTF/raw-file/tip/compositing/index.html
> >
> > Would it make sense to have the canvas section defer to this spec for 
> > all blending, filtering, and compositing?
> 
> I think it does, since then everything will be in one location. I would 
> need to update the blending spec for canvas because it behaves 
> differently than HTML and SVG.

Ok, let me know when I should do this.


On Mon, 3 Dec 2012, Gregg Tavares (社用) wrote:
> >
> > The main reason 0-sized canvases have always thrown in drawImage() is 
> > that I couldn't work out what you would paint, nor why you'd have a 
> > zero-sized canvas, and throwing seemed like it'd be the best way to 
> > help the author figure out where the problem was, rather than just 
> > ignoring the call and having the author scratch their head about why 
> > nothing was happening.
> >
> > If there's cases where you would legitimately end up with zero-sized 
> > canvases that you'd try to draw from, though, I'm happy to change it.
> 
> I don't see how zero sized canvases are any different than zero sized 
> arrays or empty strings.

They're not. In each case, if there are legitimate reasons that you could 
end up with them, I make it do nothing, whereas if there isn't, I make it 
throw.


> It's not a matter of use case. It's a matter of not having to write 
> checks everywhere for 0.

If you have to write a check for zero, that assumes it can legitimately 
happen, in which case I agree that it shouldn't throw.


> If I'm writing some app that takes a user supplied size (say a photo 
> editing app where the user can select a rectangle and copy and paste), 
> why do I want to have to check for zero?

Why would the user want to copy a zero-sized rectangle? Surely it'd be 
better to help the user not do that.


> Maybe I'm animating
> 
>    function draw(t) {
>       var scale = Math.sin(t) * 0.5 + 0.5;  // oscillates between 0 and 1
>       var width = realWidth * scale;
>       var height = realHeight * scale;
> 
>       // do something with width, height
>    }

That's a reasonable use case. I've made the image stuff do nothing when 
passed a zero-sized canvas.


> That includes ImageBitmap, Canvas and ImageData

I've updated drawImage() -- it just aborts silently if the source 
dimensions are zero (in particular, it doesn't paint anything, so no 
shadows end up visible, and the composition operator doesn't blow away the 
canvas as it would for a very small non-zero rectangle -- I couldn't work 
out exactly what should happen if we did actually try to render 
something). I haven't changed ImageBitmap or ImageData; I'm not sure what 
to do with those. If you could elaborate on which cases end up with zeroes 
there, I can figure out what it would make sense to do.
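Until implementations converge on that "abort silently" behaviour, a defensive wrapper can make application code uniform. This is a sketch; `drawImageSafe` is a hypothetical helper name, not part of the canvas API:

```javascript
// Skip the draw call entirely when the source rectangle is empty, mirroring
// the "abort silently on zero-sized sources" behaviour discussed above.
// Returns true if anything was actually drawn.
function drawImageSafe(ctx, image, sx, sy, sw, sh, dx, dy, dw, dh) {
  if (sw === 0 || sh === 0) return false;  // nothing to paint
  ctx.drawImage(image, sx, sy, sw, sh, dx, dy, dw, dh);
  return true;
}
```

This keeps the zero check in one place instead of scattering it through animation code like the draw() loop quoted earlier.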


On Mon, 10 Dec 2012, Kevin Gadd wrote:
> On Mon, Dec 10, 2012 at 10:24 AM, Ian Hickson <ian at hixie.ch> wrote:
> > There are two ways to do scaled sprites with drawImage(): have a border 
> > of transparent black around each sprite, or copy the data out of the 
> > sprite sheet and into a temporary canvas at its original size, then 
> > scale from that.
> 
> How big does the border of transparent black have to be? Maybe I'm 
> reading the spec incorrectly, but given the way it's written, 
> implementations would be free to make use of hardware mip-maps when 
> rendering images, which would mean that when scaling down, values from 
> arbitrarily far outside the source rectangle could get pulled in.

Theoretically the filtering algorithm is undefined, so yes, in theory it 
could be that even a border doesn't do anything. In practice, a one pixel 
border is presumably enough; I can't really imagine a case where it would 
be prettier to draw a scaled image (even without cropping) where the 
colours of a pixel are influenced by pixels on the other side of a 
transparent row or column. I could be wrong though.
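The one-pixel-border workaround amounts to a small amount of coordinate bookkeeping when packing the sheet. A sketch, assuming a uniform cell size and a hypothetical helper name:

```javascript
// Each w-by-h sprite occupies a (w+2)-by-(h+2) cell in the sheet, with a
// one-pixel transparent gutter on every side. The source rectangle passed
// to drawImage() then skips the gutter pixels.
function paddedSourceRect(index, columns, w, h) {
  var col = index % columns;
  var row = Math.floor(index / columns);
  return {
    sx: col * (w + 2) + 1,  // +1 skips the left gutter pixel
    sy: row * (h + 2) + 1,  // +1 skips the top gutter pixel
    sw: w,
    sh: h,
  };
}
```

The returned rectangle would be spread into the nine-argument drawImage() call, so any filtering that samples one pixel beyond the source rectangle only picks up transparent black.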

 
> Temporary canvases for every blit seems like it would imply a 
> significant performance penalty, as well, but I haven't tested that 
> technique - maybe it's okay. I do know that creating a large number of 
> temporary canvases causes performance issues in IE.

It's certainly suboptimal, I agree. These are just the options one has 
now, in the absence of a better solution.


> > > https://dl.dropbox.com/u/1643240/canvas_artifacts.html
> > Disabling image smoothing will increase artefacts, that's kind of the 
> > point. :-) Having said that, I don't really see what that test case is 
> > demonstrating. Can you elaborate?
> 
> If the test case is demonstrating the behavior I argue correct, the 
> drawn images in the canvas at the bottom will not show any red pixels. 
> Another arguably correct approach would be for red pixels only to appear 
> with image smoothing enabled. It doesn't make sense even given the 
> current spec for red pixels to appear when smoothing is disabled - if 
> you're drawing with nearest neighbor sampling, the red pixels that 
> appear in the current test case (in Firefox on Windows, at least) can 
> only be there if partial pixels are being rendered, which shouldn't be 
> possible using nearest neighbor.
>
> An acceptable compromise in this case, IMO, would be to at least require 
> that pixels from outside the source rectangle are not read if image 
> smoothing is disabled.

I haven't changed the spec here. I don't really want to specify the 
filtering algorithms beyond what we've already done; this is all supposed 
to be a quality-of-implementation issue.


> > The reason to prefer the current behaviour is if you want to just 
> > update a small part of an image. For example, if you draw a big photo, 
> > then draw text over it, then want to remove the text by just drawing 
> > the photo over where the text was but not redrawing the whole thing. 
> > If we clamped to source rectangle, we'd get artefacts in this case 
> > that couldn't be worked around (unlike the problems with scaling 
> > sprites, which can be worked around, albeit in a suboptimal fashion).
> 
> Using a clip seems like the right way to do that.

It's a lot more expensive.


On Mon, 10 Dec 2012, Justin Novosad wrote:
> 
> Couldn't we just make everyone happy by making the behavior controllable 
> through a context attribute or an additional overload of drawImage that 
> takes an extra argument?

I'm sure we will in due course. Probably not an extra optional argument 
though, since it's unclear where you'd put it (requiring all 9 other 
arguments each time seems excessive).


On Wed, 12 Dec 2012, Justin Novosad wrote:
> 
> IMHO: Undefined behavior is a spec bug. If we have a problem with the 
> spec, we fix the spec; we don't just each do our own thing.

Hear hear.


On Sun, 16 Dec 2012, Rik Cabanier wrote:
> 
> It seems a bit too expensive to add a variable to the graphics context that
> is checked for just this call.

Why would it be expensive?


On Mon, 17 Dec 2012, Justin Novosad wrote:
> 
> Yes. That sounds quite reasonable to me, but we can find a better name. The
> name "drawNonSmoothedImage" suggests that the image won't be smoothed at
> all, which is not the case.  It's hard to find a name that correctly
> describes the right behavior without getting too technical.  I am thinking
> "drawSubImage", in the sense that the sub region delimited by the source
> rectangle is treated as if it were a whole image.

If we did a method, I'd probably go for drawSprite().


> This gives me another idea: we could just have a new Image constructor 
> that creates a new image element that is a subregion of another:
> var mySprite = new Image(spriteMap, x, y, w, h);
> This can be implemented in a lightweight way that just references the data
> of the source image.

That's an interesting option, indeed.

I've done this, using ImageBitmap.
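The subregion-constructor idea can also be approximated in application code with a plain descriptor that pairs a source image with a rectangle, copying no pixels. This is a sketch with hypothetical names, not a platform API (createImageBitmap with crop arguments is the specced equivalent):

```javascript
// A lightweight sprite handle: holds only a reference to the source image
// plus the subregion it refers to.
function makeSprite(source, sx, sy, sw, sh) {
  return { source: source, sx: sx, sy: sy, sw: sw, sh: sh };
}

// Draw the sprite at (dx, dy) at its natural size via the nine-argument
// drawImage() overload.
function drawSprite(ctx, sprite, dx, dy) {
  ctx.drawImage(sprite.source, sprite.sx, sprite.sy, sprite.sw, sprite.sh,
                dx, dy, sprite.sw, sprite.sh);
}
```

Unlike an ImageBitmap, such a descriptor stays live: if the source canvas changes, later draws pick up the new pixels, which is exactly the mutability question raised below.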


On Mon, 17 Dec 2012, Kevin Gadd wrote:
> 
> Will it be possible to accept a canvas as the first argument instead of 
> an Image?

createImageBitmap accepts the following:

   <img>
   <video>
   <canvas>
   Blob
   ImageData
   CanvasRenderingContext2D
   ImageBitmap


> If you have a few live references to subregions of a larger image, will 
> that prevent browsers like Chrome from discarding the decoded pixels? 
> The fact that Chrome discards unused decoded image pixels helps keep 
> memory usage low for HTML5 games, so it would suck if this API change 
> prevented it (and other browsers) from doing that effectively.

This is entirely a UA optimisation / quality of implementation decision.


> Will creating the subregion imply a copy and the associated garbage 
> collection overhead?

Also a UA issue.


> I think what you'd want here is for it to be a reference to the 
> subregion within the existing image, which means if it's a reference to 
> a subregion of a canvas, when the canvas changes the subregion changes 
> too.

Hm, interesting. That's not what I specced. What would the use case be?

ImageBitmap is intended to be immutable once created, so that you can 
safely pass it around across workers.


> How would a new overload of the Image constructor be feature-detected in JS?

I used createImageBitmap(), so hopefully all implementations will do a 
full implementation of this feature all at once, so it's either there or 
not.


> If this becomes the correct way to solve this problem, what happens to 
> existing implementations that provided alternative sampling behavior 
> (like Chrome)? Will they get changed to match the spec, breaking apps 
> until the new Image constructor is rolled out?

Hopefully...


> Also, the more I think about it, the more the garbage collection impact 
> of invoking a constructor for every blit seems like a potential problem.

You only need to call the constructors when you create your sprites, not 
continually.


> Even in generational GCs like V8, allocating a bunch of objects isn't 
> cheap. Given that JS provides no way to hold weak references, it 
> wouldn't be straightforward to cache and evict Image objects for each 
> particular source rectangle used when drawing.

Could you elaborate on this?
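Without weak references, one way to bound the allocation cost Kevin describes is an explicit cache keyed by the source rectangle, with a fixed capacity and simple oldest-first eviction. A sketch with hypothetical names; the factory callback would wrap whatever actually creates the sprite object:

```javascript
// Bounded cache of per-rectangle sprite objects. Map iterates in insertion
// order, so the first key is the oldest entry.
function SpriteCache(capacity, create) {
  this.capacity = capacity;
  this.create = create;  // factory: (sx, sy, sw, sh) -> sprite
  this.map = new Map();
}

SpriteCache.prototype.get = function (sx, sy, sw, sh) {
  var key = sx + ',' + sy + ',' + sw + ',' + sh;
  if (this.map.has(key)) return this.map.get(key);
  if (this.map.size >= this.capacity) {
    // Evict the oldest entry to make room.
    this.map.delete(this.map.keys().next().value);
  }
  var sprite = this.create(sx, sy, sw, sh);
  this.map.set(key, sprite);
  return sprite;
};
```

This trades occasional re-creation of an evicted sprite for a hard cap on the number of live objects, rather than relying on the GC.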


On Mon, 10 Dec 2012, Rik Cabanier wrote:
> On Mon, Dec 10, 2012 at 8:45 PM, Ian Hickson <ian at hixie.ch> wrote:
> > On Mon, 10 Dec 2012, Rik Cabanier wrote:
> > >
> > > yes, however it will be slower since the pattern has to be rendered 
> > > two or four times. If you can reflect in x and y, you can calculate 
> > > the pattern cell once and then have your hardware do the tiling.
> >
> > If it's something that happens a lot, then certainly it makes sense to 
> > add it. But I've heard very few requests for this.
> 
> yeah, we (= Adobe) never implemented it natively in our graphics 
> libraries but other frameworks (such as xps from Microsoft) did.

It's more a matter of who wants to use it than who implemented it, but 
thanks, that's useful information also.
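Rendering the cell "two or four times" to fake reflection amounts to drawing the w-by-h cell into a 2w-by-2h tile under four mirror transforms, then tiling that with plain 'repeat'. A sketch of the quadrant bookkeeping (hypothetical helper; each entry is the scale/translate to apply before drawing the cell at the origin):

```javascript
// Transforms for the four mirrored copies of a w-by-h pattern cell inside
// a 2w-by-2h tile: [scaleX, scaleY, translateX, translateY] per quadrant.
// E.g. for the top-right copy, translating by 2w then scaling x by -1 maps
// cell x to 2w - x, covering the range w..2w mirrored.
function reflectedQuadrants(w, h) {
  return [
    [ 1,  1, 0,     0    ],  // top-left: original
    [-1,  1, 2 * w, 0    ],  // top-right: mirrored in x
    [ 1, -1, 0,     2 * h],  // bottom-left: mirrored in y
    [-1, -1, 2 * w, 2 * h],  // bottom-right: mirrored in both
  ];
}
```

On a canvas, each entry would be applied as a translate followed by a scale before drawing the cell into the tile, so the hardware can then repeat the pre-mirrored tile without any per-frame cost.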

-- 
Ian Hickson               U+1047E                )\._.,--....,'``.    fL
http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'

