[whatwg] <video> feedback
Ian Hickson
ian at hixie.ch
Tue Feb 9 18:03:35 PST 2010
On Wed, 28 Oct 2009, Kit Grose wrote:
>
> I've been working on my first HTML5 frontend, which is using the video
> element, and I've run into a part of the spec that I disagree with (and
> would like to understand its justification):
>
> > Content may be provided inside the video element. User agents should
> > not show this content to the user; it is intended for older Web
> > browsers which do not support video, so that legacy video plugins can
> > be tried, or to show text to the users of these older browsers
> > informing them of how to access the video contents.
>
> As a content producer, I have no desire to double-encode all our
> existing videos (from their current H.264 format into OGG), especially
> since we really only see around 58% Firefox marketshare on this site. I'm
> using Kroc Camen's Video For Everybody
> (http://camendesign.com/code/video_for_everybody ), so UAs which don't
> support the video element get a QuickTime object first, and a Flash 9+
> player (capable of playing H.264 video) next, and lastly a video
> download link.
>
> I expected (incorrectly, in this case) that if I only produced one
> source element (an MP4), Firefox would drop down to use the fallback
> content, as it does if I include an object element for a format not
> supported (for example, if I include a QuickTime object and QT is not
> installed, the user sees fallback content). As far as I can see, the
> only option in this situation is to rely on Javascript and the video
> element's canPlayType() function.
>
> Can I get some sort of an understanding on why this behaviour (non-
> descript error in supported UAs rather than using the fallback content
> that can provide alternate access methods) would be preferred?
The idea is that there will be a common codec that all browsers support,
so that this is not an issue. Getting such a codec is an ongoing effort.
On Wed, 28 Oct 2009, Kit Grose wrote:
>
> Thanks for the explanation. While I understand the issue you present
> with precedence of JS and fallback content, I can't off the top of my
> head come up with any necessary uses for the canPlayType function (maybe
> as a nice-to-have, of course) had the behaviour worked more
> predictably, particularly if the tradeoff is a totally non-workable
> solution in modern browsers with NoScript turned on in situations like
> mine. What happens if/when IE comes to the party but requires WMV
> output? Will we all then encode *three* of the same video just to get
> broader support? I can't see the complexity of that operation ever
> trumping the ease of use (from a content producer's end) of a single FLV
> with a Flash video player, which is surely the ultimate goal here.
Indeed, if we can't get a common codec, the spec as written today is not
a particularly good design. If we really can't solve this problem, then
we'll have to introduce a declarative way of saying "if you can't play any
of the videos, here's what I want you to do instead" -- but hopefully we
won't have to go there.
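In the meantime, the script-based workaround mentioned above is roughly
this (an untested sketch; the 'flash-player' element is a stand-in for
whatever legacy fallback the page provides):
  var video = document.getElementsByTagName('video')[0];
  if (video && video.canPlayType &&
      video.canPlayType('video/mp4; codecs="avc1.42E01E, mp4a.40.2"') == '') {
    // The UA supports <video> but not this codec, so it will never show
    // the fallback content on its own; swap in a legacy player instead.
    var flash = document.getElementById('flash-player'); // hypothetical
    video.parentNode.replaceChild(flash, video);
  }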
On Thu, 3 Dec 2009, Kit Grose wrote:
>
> Sorry to resurrect an old thread but I was using my iPhone and had an
> extra couple of questions about this I was hoping people might be able
> to answer for me.
>
> The iPhone (like other similar devices) is restricted to certain file
> formats and even bitrates/image sizes. When the iPhone encounters our
> <video> element, I can supply a non-compatible video (still in an MP4
> container) and the iPhone knows to mark the video in place as
> non-playable. If I whack in a compatible H.264 video, the video is shown
> as playable.
>
> Can someone explain to me how this works, given Aryeh's response above?
> Surely if the iPhone can determine its capacity to be able to play a
> video file, other UAs could do likewise and fall back on the content
> accordingly as UAs with zero <video> support do?
On Thu, 3 Dec 2009, Philip Jägenstedt wrote:
>
> I know nothing about the iPhone, but any UA can know if it can play a
> resource or not simply by trying and adjusting the UI as appropriate.
> One *could* use the same hooks to display fallback content in those
> cases, but it is a very bad idea. Apart from the things Aryeh mentions,
> because of how the resource selection algorithm works, you can never
> know if there will be a playable resource later, so there's no point
> where it's appropriate to show the fallback content. The only remaining
> option is flip-flopping between replaced content (video) and fallback
> content, which we don't want (especially considering that the fallback
> content is likely to contain <object> for Flash or some other legacy
> fallback).
On Thu, 3 Dec 2009, Kornel Lesiński wrote:
>
> How about making end of selection algorithm explicit?
>
> Something like video.imDoneWithSourcesEitherPlayOrShowFallback() method,
> which upon failure permanently locks <video> in fallback state (to avoid
> flip-flopping).
Well if you're using script, you can just do whatever behaviour you want
from the onerror handler of the last <source>.
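Something along these lines, say (untested; 'video' is the media element
and replaceWithFallback() stands in for whatever behaviour the page wants):
  var sources = video.getElementsByTagName('source');
  var last = sources[sources.length - 1];
  last.addEventListener('error', function () {
    // None of the listed sources could be used, so nothing is going to
    // play; show the page's own fallback instead.
    replaceWithFallback(video); // hypothetical function
  }, false);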
> or a special source that, if selected, triggers fallback:
>
> <video>
> <source src=file>
> <source fallback> (or <source src="#fallback">?)
> </video>
Something like this will probably have to be used if we can't get a common
codec, indeed.
On Sat, 31 Oct 2009, Brian Campbell wrote:
>
> As a multimedia developer, I am wondering about the purpose of the timeupdate
> event on media elements.
Its primary use is keeping the UIs updated (specifically the timers and
the scrubber bars).
> On first glance, it would appear that this event would be useful for
> synchronizing animations, bullets, captions, UI, and the like.
Synchronising accompanying slides and animations won't work that well with
an event, since you can't guarantee the timing of the event or anything
like that. For anything where we want reliable synchronisation of multiple
media, I think we need a more serious solution -- either something like
SMIL, or the SMIL subset found in SVG, or some other solution.
> At 4 timeupdate events per second, it isn't all that useful. I can
> replace it with setInterval, at whatever rate I want, query the time,
> and get the synchronization I need, but that makes the timeupdate event
> seem to be redundant.
The important thing with timeupdate is that it also fires whenever the
time changes in a significant way, e.g. immediately after a seek, or when
reaching the end of the resource, etc. Also, the user agent can start
lowering the rate in the face of high CPU load, which makes it more
user-friendly than setInterval().
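So for UI you can just hang the updates off the event rather than polling,
along the lines of (a rough sketch; formatTime() and the timer and scrubber
elements are made up for illustration):
  video.addEventListener('timeupdate', function () {
    // Fires during normal playback and also right after seeks and at the
    // end of the resource, so the display never lags behind a jump.
    timer.textContent = formatTime(video.currentTime);
    scrubber.value = video.currentTime / video.duration;
  }, false);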
On Thu, 5 Nov 2009, Brian Campbell wrote:
> >
> > Would something like <video> firing events for every frame rendered
> > help you out? This would help also fix the <canvas> over/under
> > painting issue and improve synchronization.
>
> Yes, this would be considerably better than what is currently specced.
There surely is a better solution than copying data from the <video>
element to a <canvas> on every frame, whatever problem that solves. What
is the actual use case where you'd do that?
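(The pattern being discussed is presumably something like this repeated
copy, sketched here with the video, canvas and context assumed to be set
up elsewhere:)
  setInterval(function () {
    // Repaint the canvas from the current video frame so overlays or
    // per-pixel processing can be drawn on top of it.
    context.drawImage(video, 0, 0, canvas.width, canvas.height);
  }, 1000 / 30); // guessing at the frame rate is exactly the problem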
On Thu, 5 Nov 2009, Andrew Scherkus wrote:
>
> I'll see if we can do something for WebKit based browsers, because today
> it literally is hardcoded to 250ms for all ports.
> http://trac.webkit.org/browser/trunk/WebCore/html/HTMLMediaElement.cpp#L1254
>
> Maybe we'll end up firing events based on frame updates for video, and
> something arbitrary for audio (as it is today).
I strongly recommend making the ontimeupdate rate be sensitive to system
load, and no faster than one frame per second.
On Fri, 6 Nov 2009, Philip Jägenstedt wrote:
>
> We've considered firing it for each frame, but there is one problem. If
> people expect that it fires once per frame they will probably write
> scripts which do frame-based animations by moving things n pixels per
> frame or similar. Some animations are just easier to do this way, so
> there's no reason to think that people won't do it. This will break
> horribly if a browser is ever forced to drop a frame, which is going to
> happen on slower machines. On balance this may or may not be a risk
> worth taking.
I strongly agree with this.
On Sat, 7 Nov 2009, Jonas Sicking wrote:
>
> When timeupdate was added, the stated goal was actually as a battery
> saving feature for, for example, mobile devices. The idea was that the
> implementation could scale back how often it fired the event in order to
> save battery.
Indeed.
> Now that we have implementation experience, is timeupdate fulfilling
> this goal? If not, is it fulfilling any other goals making it worth
> keeping?
On Sat, 7 Nov 2009, Justin Dolske wrote:
>
> FWIW, I felt that having Firefox's default video controls update their
> state for every frame was excessive (and could lead to competing for the
> CPU with the video itself). So, the controls basically ignore timeupdate
> events that occur within .333 seconds of the last timeupdate position...
> Which leads to having a bit of complication to deal with edge cases like
> having the video end less than .333 seconds after the last timeupdate
> event (otherwise the UI might look like it's stuck shortly before the end
> of the video).
> the video).
>
> At least for my needs, having an event fire at ~3 Hz (and when special
> things happen, like a seek or the video ending) would be somewhat
> simpler and more efficient.
3Hz seems a little slow for the timer -- you'd want at least 10Hz so you
can show a tenths-of-a-second timer. More than that seems pointless
though.
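(For reference, the throttling Justin describes boils down to something
like this untested sketch, with updateControls() standing in for the real
UI work:)
  var lastShown = -1;
  video.addEventListener('timeupdate', function () {
    var t = video.currentTime;
    // Skip updates within a third of a second of the last one shown,
    // except near the very end so the UI doesn't appear to get stuck.
    if (t - lastShown < 0.333 && video.duration - t > 0.333)
      return;
    lastShown = t;
    updateControls(t); // hypothetical
  }, false);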
On Sat, 7 Nov 2009, Silvia Pfeiffer wrote:
>
> I use timeupdate to register a callback that will update
> captions/subtitles.
That's only a temporary situation, though, so it shouldn't inform our
decision. We should in due course develop much better solutions for
captions and time-synchronised animations.
On Sat, 7 Nov 2009, Robert O'Callahan wrote:
>
> Hmm. Why would you want timeupdate to fire more often than once per
> frame?
On Fri, 6 Nov 2009, Andrew Scherkus wrote:
>
> If you tie progress bar animation to timeupdate, the position will
> update in larger steps depending on the framerate of the video. i.e., a
> 10 second clip at 1fps will animate in 10 large steps.
>
> If your video subsystem returns a wall clock or some continuously
> increasing time source, the 250ms update will at least give you a
> smoother animating progress bar. i.e., a 10 second clip at 1fps will
> animate in 40 smaller steps.
timeupdate's primary use case is indeed this UI, which is why the spec
requires a 4Hz minimum even if the frame rate is slower.
On Fri, 6 Nov 2009, Brian Campbell wrote:
>
> Our major use case is actually synchronizing bullets, slide changes, and
> the like with video, in educational multimedia produced with high
> production values.
For this, timeupdate is terrible. You need something like the old cue-range
interface, and we'll introduce something for this in the next version for
sure, along with captions support. All we're waiting for is for
implementations to be of high enough quality that the existing spec can be
reliably used by authors.
On Wed, 11 Nov 2009, Philip Jägenstedt wrote:
>
> Since we are no longer using progress events for media elements we don't
> have the external requirement that abort/error shouldn't bubble. I'd
> like them to bubble, because:
>
> 1. error events fired on <source> will bubble to <video>, which is quite
> useful if one doesn't particularly care which source failed (one need
> not register an event handler on each individual source element)
>
> 2. Implementors don't have to deal with the possibility that events of
> the same name and type sometimes bubble and sometimes not.
>
> 3. It's the same as for <img>, which, all else being equal, seems nice and
> simple.
>
> I'll note that <video> abort/error events in Firefox already seem to
> bubble while they apparently don't in Safari. We'd like to align with
> Firefox and have the spec changed.
On Thu, 14 Jan 2010, Philip Jägenstedt wrote:
>
> It looks like I was wrong. As far as I can see error/abort doesn't
> bubble in any other scenario and it seemed to be that way in Firefox
> because the error event is fired on the <video> element, or something.
> No spec change needed.
No spec change done!
On Sat, 28 Nov 2009, Philip Jägenstedt wrote:
>
> As part of the work in the W3C HTML Accessibility Task Force I have
> proposed a new <overlay> element to handle several use cases which are
> currently not solved by HTML5 <video>.
>
> http://wiki.whatwg.org/wiki/Video_Overlay
>
> Certainly we shouldn't be adding this to HTML5 at this point, but I
> think HTML6 and beyond is something the WHATWG should be involved with.
There are many proposals in this area. I'm just waiting for
implementations of the existing stuff to be solid. Captions and cue ranges
are the next thing on the list.
On Sat, 5 Dec 2009, Keith Bauer wrote:
>
> It looks in the current draft spec as if audio is not pannable, and from
> Googling it looks like this was at one point considered, but I can't
> find an explanation as to what happened between then and now.
>
> Obviously panning is problematic for stereo audio, but with Canvas and
> WebGL making browser games more possible, having the ability to pan at
> least mono audio seems like a worthwhile addition.
This may make sense in a future version, but it doesn't seem critical at
this point where we don't even have captions in the spec. :-)
> Or is WebGL to be followed shortly by WebAL ;)
That's not that unrealistic, actually. It might even make sense to have
WebGL have built-in support for 3D audio. I would recommend asking this on
the relevant Khronos list.
On Sat, 12 Dec 2009, Hugh Guiney wrote:
>
> So, in my first foray into preparing Theora/Vorbis content, for use with
> <video>, I realized that I wasn't sure with what settings to encode my
> materials. Should I:
>
> A.) Supply my visitors with the best possible quality at the expense of
> loading/playback speed for people on slower connections
>
> B.) Just account for the lowest common denominator and give everyone a
> low quality encode
>
> or
>
> C.) Go halfway and present a medium quality encode acceptable for "most
> people"?
>
> A. is not legacy-proof, B. is not future-proof, and C. is neither.
> C. may sound like the most sensible solution, but even if I were to put
> up something that worked for "most people" *right now*, as computers
> become more capable and connections become faster, more visitors are
> going to want higher-quality videos, meaning I'd have to stay on top of
> the relevant trends and update my pages accordingly.
>
> Ideally, I would like to be able to simply encode a few different
> quality variations of the same file and serve each version to its
> corresponding audience.
>
> There are a few ways I could do this. One of the most obvious ways would
> be to present different versions of the site, e.g. one for "slow
> connections" and one for "fast connections" and have the user pick via a
> splash page before entering, as was popular in the '90s. But this is almost
> certainly a faux pas today: it puts a wall between the user and my
> content, and requires me to maintain two different versions of the site.
> Hardly efficient.
>
> Another way would be to itemize each version of the file in a list, with
> details next to them such as frame and file size, so the user could pick
> accordingly. While this would probably be fine for downloads, it
> completely defeats the point of embedded media.
>
> Alternatively, I could devise a script that prompts users for their
> connection speed and/or quality preference, which (assuming they know
> it) would then go through the available resources on the server and
> return the version of the file I'd have allocated to that particular
> response. But that would require either branching for every file
> alternative of every video on my site in the script, or specifying the
> quality in some other way that can be programmatically exploited;
> perhaps using microdata, but then I'd be stuffing the fallback content
> with name-value pairs, which isn't particularly accessible.
>
> Or, I could invent my own HTTP header and try to get everyone to use it.
> Which is a lot to do for something like this, and isn't guaranteed to
> work.
>
> None of these options seem particularly viable to me. Right now, the
> HTML5 spec allows UAs to choose between multiple versions of a media
> resource based on type. In the interest of making media more accessible
> to users of varying bandwidth and processing power, and easier to
> maintain for authors, I propose allowing the relative quality of each
> resource to be specified for multiple-source media.
>
> You will notice that in Flash animations, there is a context menu option
> to change the rendered quality between "High", "Medium", and "Low". Each
> setting degrades or upgrades the picture, and requires less or more
> computing power to process respectively. Additionally, some Flash video
> authors elect to construct their own quality selection UI/scripting
> within the video itself, allowing them to have a finer degree of control
> over the presentation of the image.
>
> Similarly, YouTube has the ability to switch between standard quality,
> high quality, and high definition videos based on users' preferences. In
> the "Playback Setup" section of "Account Settings", you will find the
> following options:
>
> "Video Playback Quality
> Choose the default setting for viewing videos
> * Choose my video quality dynamically based on the current connection speed.
> * I have a slow connection. Never play higher-quality video.
> * I have a fast connection. Always play higher-quality video when it's
> available."
>
> If HTML video is to compete with Flash, or become implemented on as
> wide a scale as YouTube <http://www.youtube.com/html5>, it makes sense
> to allow for some sort of quality choice mechanism, as users will have
> come to expect that functionality.
>
> This could be done by allowing an attribute on <source> elements that
> takes a relative value, such as (or similar to) those specified in
> HTTP <http://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html#sec3.9>.
> This attribute could be called "quality" or "qvalue" or just "q" (my
> personal preference would be in that order, decreasing), and be used as
> such:
>
> <video controls>
>  <source src='video-hd.ogv' quality='1.0' type='video/ogg; codecs="theora, vorbis"'>
>  <source src='video-hq.ogv' quality='0.5' type='video/ogg; codecs="theora, vorbis"'>
>  <source src='video-sd.ogv' type='video/ogg; codecs="theora, vorbis"'>
> </video>
>
> In this case, video-hd.ogv (a high definition encode) would be the
> author's preferred version, video-hq.ogv (a high quality standard
> definition encode) would be less preferred than video-hd.ogv, but more
> preferred than video-sd, and video-sd (a standard definition encode)
> would be less preferred than both, since it lacks a quality attribute
> and would thus be the equivalent of specifying "quality='0.001'".
>
> The UA could then have a playback setup that would allow the user to
> specify how it should handle content negotiation for multiple-source
> media. This could be based solely on the quality attribute if provided,
> or if @type is also provided, also based on what content-type the user
> prefers.
Thank you for this detailed problem description and discussion of a
suggested solution.
I think my recommendation would be something similar to what you suggest
above regarding an HTTP header, but more specific to the Content-Type
header: a new MIME parameter similar to "codecs" that describes the power
needed for playback, in terms of network bandwidth, CPU, etc. This could
just be boiled down to a number, e.g. "1" for today's "low" and "2" for
today's "high", with the number being increased over the years as we get
better and better.
Alternatively, we could extend Media Queries to specify the kind of CPU
and bandwidth expected to be needed for a media resource. This would fit
right into the Media Queries model.
Or, of course, we could add an attribute to <source>, as you suggest.
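To illustrate the first option, the markup might end up looking something
like this (the 'profile' parameter name is invented here purely for
illustration; nothing like it exists yet):
  <video controls>
   <source src='video-hd.ogv' type='video/ogg; codecs="theora, vorbis"; profile=2'>
   <source src='video-sd.ogv' type='video/ogg; codecs="theora, vorbis"; profile=1'>
  </video>
A UA configured for (or detecting) a low-powered situation would then skip
sources whose declared requirements it can't meet, the same way it already
skips types it can't play.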
The best thing to do is to approach browser vendors directly (e.g. on
their relevant mailing lists, like webkit-dev for WebKit, or the Mozilla
newsgroups for Firefox), and see if they would be interested in doing
something like this. The WHATWG FAQ gives some detail on this:
http://wiki.whatwg.org/wiki/FAQ#Is_there_a_process_for_adding_new_features_to_a_specification.3F
On Fri, 29 Jan 2010, Robert O'Callahan wrote:
>
> 1) Should be convenient for authors to make any element in a page display
> fullscreen
> 2) Should support in-page activation UI for discoverability
> 3) Should support changing the layout of the element when you enter/exit
> fullscreen mode. For example, authors probably want some controls to be
> fixed size while other content fills the screen.
> 4) Should accommodate potential UA security concerns, e.g. by allowing the
> transition to fullscreen mode to happen asynchronously after the user has
> confirmed permission
>
> *** WARNING: totally half-baked proposal ahead! ***
>
> New API for all elements:
> void enterFullscreen(optional boolean enableKeys);
> void exitFullscreen();
> boolean attribute supportsFullscreen;
> boolean attribute displayingFullscreen;
> "beginfullscreen" and "endfullscreen" events
>
> While an element is fullscreen, the UA imposes CSS style "position:fixed;
> left:0; top:0; right:0; bottom:0" on the element and aligns the viewport of
> its DOM window with the screen. Only the element and its children are
> rendered, as a single CSS stacking context.
>
> enterFullscreen always returns immediately. If fullscreen mode is
> currently supported and permitted, enterFullscreen dispatches a task
> that a) imposes the fullscreen style, b) fires the beginfullscreen event
> on the element and c) actually initiates fullscreen display of the
> element. The UA may asynchronously display confirmation UI and dispatch
> the task when the user has confirmed (or never).
>
> The enableKeys parameter to enterFullscreen is a hint to the UA that the
> application would like to be able to receive arbitrary keyboard input.
> Otherwise the UA is likely to disable alphanumeric keyboard input. If
> enableKeys is specified, the UA might require more severe confirmation
> UI.
>
> In principle a UA could support multiple elements in fullscreen mode at
> the same time (e.g., if the user has multiple screens).
>
> enterFullscreen would throw an exception if fullscreen was definitely
> not going to happen for this element due to not being supported or
> currently permitted, or if all screens are already occupied.
>
> supportsFullscreen returns false if it's impossible for this element to
> ever be shown fullscreen. It does not reveal whether permission will be
> granted.
What's the case where supportsFullscreen would be false?
On Sat, 30 Jan 2010, Robert O'Callahan wrote:
>
> So how about a Window API with an optional element component:
> void enterFullscreen(optional DOMElement element, optional boolean
> enableKeys);
> void exitFullscreen();
> boolean attribute supportsFullscreen;
> boolean attribute displayingFullscreen;
> "beginfullscreen" and "endfullscreen" events
>
> Where "beginfullscreen" and "endfullscreen" are targeted at the element if
> one was provided, or else at the window, and bubble. While a window is
> fullscreen, the root element and the designated fullscreen element, if any,
> are given a pseudoclass "fullscreen". Then you can have some default rules
> in the UA style sheet:
> *:root:fullscreen { overflow:hidden; }
> *:not(:root):fullscreen { position:fixed; left:0; top:0; bottom:0; right:0;
> }
That seems like more than necessary. Why not just let the author do the
element stuff? The above proposal doesn't do anything that the author
can't already do, right?
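For instance, with just a window-level API, the author could handle the
element part in a few lines (a sketch against the proposed and entirely
unimplemented API; 'button', 'player' and the class name are made up):
  button.onclick = function () {
    // Style the element to fill the viewport ourselves (the page's own
    // CSS makes .fullscreen fill it), then ask the UA to take the whole
    // window fullscreen.
    player.className = 'fullscreen';
    window.enterFullscreen();
  };
  window.addEventListener('endfullscreen', function () {
    player.className = '';
  }, false);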
On Sat, 30 Jan 2010, Anne van Kesteren wrote:
>
> To stop polluting the Window object, might it make sense to put the new
> members (other than event handler attributes) on window.screen?
Seems reasonable.
On Sat, 30 Jan 2010, Simon Fraser wrote:
>
> I'd like to start a discussion on steps that the UA may take to mitigate
> the risks of using the fullscreen API for phishing attacks. I'm not sure
> how much should be required of UAs in the spec, but I could imagine that
> steps that the UA could take may include some or all of the following:
>
> * require that enterFullscreen() is being called inside a user-event
> handler (e.g. click or keypress) to avoid drive-by fullscreen
> annoyances.
> * drop out of fullscreen if navigating to another page
> * disallow window.open etc. while in fullscreen
These seem like reasonable requirements. (Navigation restrictions maybe
should be limited to cross-origin navigation.)
> * show an animation as the window enters fullscreen so the user can see
> the transition taking place
> * show a hard-to-spoof overlay with some text that tells the user that
> they can use the Escape key to exit fullscreen, and prevent the page
> from capturing this keypress.
> * show an affordance to allow the user to exit fullscreen (e.g. a close
> button) when the user moves the mouse
> * limit arbitrary keyboard input unless 'enableKeys' is true
These seem like reasonable UA features.
> * make the location field available to the user so that they can see the
> URL even when in fullscreen
I think we'd want this to only be visible when the mouse moved; I wouldn't
want to watch a movie with a location bar the whole time :-)
> * disallow enterFullscreen() from a frame or iframe
That seems like a bad idea because it's expected that embedded video
players will be in iframes.
> * if focussed on an element, drop out of fullscreen if that element is
> removed from the DOM
I'm not convinced we need to make the API element-specific.
On Tue, 2 Feb 2010, Robert O'Callahan wrote:
>
> However, I'd be very, very reluctant to allow subframes to go fullscreen by
> default. I haven't got any specific attack scenarios in mind, but it
> seems to add to the power of clickjacking, which is the last thing we
> need.
Could you elaborate on this? It seems like this would only let pages make
other pages full-screen... but they can already do that by making
themselves full-screen and showing the inner iframe.
On Mon, 1 Feb 2010, Brian Campbell wrote:
>
> I think it would be best to immediately go as full screen as possible
> (so, full window if permission hasn't yet been given), and then resize
> to full screen if permission is granted. This will avoid content authors
> having to duplicate that same functionality themselves for their users
> that don't ever give or deny permission.
We can do that with an API that just does page-wide fullscreen -- when the
page requests fullscreen mode, it makes the relevant bit take the full
width of the page, and then only if the user agrees to fullscreen does the
window actually go fullscreen.
> Resizing when in full screen mode will need to be implemented anyhow, to
> support devices like the iPhone or iPad which can change orientation and
> will need to reshape the screen.
Indeed. Generally this is free (CSS will just handle it automatically).
> No, you can't stop someone who is truly dedicated from guessing based on
> the exact size. My concern is more with authors who feel that their
> content is best displayed in full screen, and so may simply refuse to
> play it until they've gotten the fullscreen event or have the fullscreen
> pseudoclass. That would be pretty easy to implement, if you have that
> functionality available to you. I know my previous director would have
> requested it; he is very particular about content being displayed in
> full screen, and while I would argue that we shouldn't lock people out
> who don't want to be in full screen mode, I may have been overruled if
> such functionality were available and so easy to use.
Yeah... it might be ok to have only the "exit full screen" event and have
it trigger just when the user declines or exits? That way if the user does
nothing, the page can't know, and it'll just render "full window" rather
than "full screen".
It seems like this API is best put on the Screen object, which I believe
effectively means it belongs in the CSSOM spec and not the HTML spec.
Anne, is this something you are willing to spec?
On Mon, 25 Jan 2010, Simon Pieters wrote:
> >
> > + <p>If the user agent is still performing the previous iteration of
> > + the sequence (if any) when the next iteration becomes due, the
> > + user agent must not execute the overdue iteration, effectively
> > + "skipping missed frames" of the drag-and-drop operation.</p>
>
> Should timeupdate also "skip missed frames"? (I think Firefox does as a
> consequence of skipping frames while script is running and firing
> timeupdate for each frame. Opera currently queues up the events, IIRC.)
Done.
On Thu, 4 Feb 2010, Yaar Schnitman wrote:
>
> According to [1], the video's width & height attributes are DOMString, but
> according to [2] width & height "must have values that are valid
> non-negative integers".
> Shouldn't they be long then?
>
> Digging deeper, I found that video, iframe, embed and object all have
> DOMString width & height attributes, but img specifies width & height to be
> long [3]. For consistency, shouldn't all of them be the same?
HTMLImageElement is different for historical reasons. I made <video>'s
height and width be DOMStrings for consistency with the rest.
--
Ian Hickson U+1047E )\._.,--....,'``. fL
http://ln.hixie.ch/ U+263A /, _.. \ _\ ;`._ ,.
Things that are impossible just take longer. `._.-(,_..'--(,_..'`-.;.'