[whatwg] Adding features needed for WebGL to ImageBitmap

Rik Cabanier cabanier at gmail.com
Sun Jul 14 16:26:32 PDT 2013


On Thu, Jul 11, 2013 at 1:24 PM, Kenneth Russell <kbr at google.com> wrote:

> On Thu, Jul 11, 2013 at 8:29 AM, Justin Novosad <junov at google.com> wrote:
> >
> >
> > On Wed, Jul 10, 2013 at 9:37 PM, Rik Cabanier <cabanier at gmail.com>
> > wrote:
> >>
> >> On Wed, Jul 10, 2013 at 5:07 PM, Ian Hickson <ian at hixie.ch> wrote:
> >>
> >> > On Wed, 10 Jul 2013, Kenneth Russell wrote:
> >> > >
> >> > > ImageBitmap can cleanly address all of the desired use cases simply
> >> > > by adding an optional dictionary of options.
> >> >
> >> > I don't think that's true. The options only make sense for WebGL --
> >> > flipping which pixel is the first pixel, for example, doesn't do
> >> > anything
> >> > to 2D canvas, which works at a higher level.
> >> >
> >> > (The other two options don't make much sense to me even for GL. If you
> >> > don't want a color space, don't set one. If you don't want an alpha
> >> > channel, don't set one. You control the image, after all.)
> >> >
> >> >
> >> > > I suspect that in the future some options will be desired even for
> >> > > the 2D canvas use case, and having the dictionary already specified
> >> > > will make that easier. There is no need to invent a new primitive
> >> > > and means of loading it.
> >> >
> >> > If options make sense for 2D canvas, then having ImageBitmap options
> >> > would
> >> > make sense, sure.
> >> >
> >> >
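
For concreteness, the kind of options dictionary being discussed might look
something like this (the option names are placeholders I'm making up for
illustration, not anything from a spec draft):

    // hypothetical option names, for illustration only
    createImageBitmap(image, {
        flipY: true,                   // first pixel is the bottom-left one, as GL expects
        premultiplyAlpha: false,       // leave the color channels unmultiplied
        colorSpaceConversion: "none"   // skip the browser's default color management
    });
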
> >> yeah, these options seem a bit puzzling.
> >> From the spec:
> >>
> >> An ImageBitmap object represents a bitmap image that can be painted to a
> >> canvas without undue latency.
> >>
> >> note: The exact judgement of what is undue latency of this is left up to
> >> the implementer, but in general if making use of the bitmap requires
> >> network I/O, or even local disk I/O, then the latency is probably undue;
> >> whereas if it only requires a blocking read from a GPU or system RAM,
> >> the latency is probably acceptable.
> >>
> >> It seems that people see the ImageBitmap as something that doesn't just
> >> represent in-memory pixels but that those pixels are also preprocessed
> >> so they can be drawn quickly. The latter is not in the spec.
> >>
> >> I think authors will be very confused by these options. What would it
> >> mean to pass a non-premultiplied ImageBitmap to a canvas object? Would
> >> the browser have to add code to support it or is it illegal?
> >> Maybe it's easier to add an optional parameter to createImageBitmap to
> >> signal if the ImageBitmap is for WebGL or for Canvas and disallow a
> >> Canvas ImageBitmap in WebGL and vice versa.
> >
> >
> > You are implying a pretty heavy imposition as to what constitutes undue
> > latency.
> > I think the spec should stay away from forcing implementations to pin
> > decoded image buffers in RAM (or on the GPU), so that the browser may
> > have some latitude in preventing out-of-memory exceptions. In its
> > current form, the spec implies that it would be acceptable for an
> > implementation to discard the decoded buffer and only retain the
> > resource in encoded form in RAM. Do we really need to make further
> > optimizations explicit? For example, an implementation could prepare the
> > image data for use with WebGL the first time it is drawn to WebGL, and
> > keep it cached in that state. If the same ImageBitmap is subsequently
> > drawn to a 2D canvas, then it would use the non-WebGLified copy, which
> > may be cached, or may require re-decoding the image. No big deal.
>
> The step of preparing the image for use, either with WebGL or 2D
> canvas, is expensive. Today, this step is necessarily done
> synchronously when an HTMLImageElement is uploaded to WebGL. The
> current ImageBitmap proposal would still require this synchronous
> step, so for WebGL at least, it provides no improvement over the
> current HTML5 APIs. A major goal of ImageBitmap was to allow Web
> Workers to load them, and even this ability currently provides no
> advantage over HTMLImageElement.
>
> > Fundamental question: Do we really need the caller to be able to specify
> > what treatments need to be applied to prepare an image for WebGL, or is
> > it always possible to figure that out automatically?
>
> It is never possible to figure out automatically how the image needs
> to be treated when preparing it for use with WebGL. I'm not sure where
> that idea came from.


Gregg's email says that WebGL almost always needs the opposite options from
Canvas.
I was thinking that maybe it's acceptable to just make it a switch between
Canvas 2D and WebGL.
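
Something along these lines (a strawman only; the parameter name and values
are made up):

    // strawman: one switch instead of individual knobs
    createImageBitmap(image, { intendedUse: "webgl" });   // or "2d"

drawImage() could then reject a "webgl" bitmap, and texImage2D() a "2d" one.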

> On the contrary, there are eight possibilities (2^3), and different
> applications require different combinations.
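
For reference, those are presumably the three knobs WebGL already exposes on
the synchronous upload path, where the prep work happens inside texImage2D
(gl is a WebGL context, image a loaded HTMLImageElement):

    // Today: each flag is applied synchronously when the image is uploaded.
    gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, true);
    gl.pixelStorei(gl.UNPACK_PREMULTIPLY_ALPHA_WEBGL, false);
    gl.pixelStorei(gl.UNPACK_COLORSPACE_CONVERSION_WEBGL, gl.NONE);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);

Each flag can go either way, which is where the eight combinations come from.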


