[whatwg] <video>/<audio> feedback

Ian Hickson ian at hixie.ch
Thu Apr 30 00:14:06 PDT 2009

On Fri, 10 Apr 2009, Robert O'Callahan wrote:
> Media element state changes, such as readyState changes, trigger 
> asynchronous events. When the event handler actually runs, the element 
> state might have already changed again. For example, it's quite possible 
> for readyState to change to HAVE_ENOUGH_DATA, a canplaythrough event is 
> scheduled, then the readyState changes to HAVE_CURRENT_DATA, then the 
> canplaythrough event handler runs and may be surprised to find that the 
> state is not HAVE_ENOUGH_DATA.

Yeah. Not sure what to do about this.
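
One defensive pattern for authors (a sketch only; the handler name and the plain objects below are illustrative, not spec API) is to re-check readyState when the handler actually runs, since the event only reflects the state at the time it was queued:

```javascript
// Numeric readyState constants from the spec.
var HAVE_CURRENT_DATA = 2, HAVE_ENOUGH_DATA = 4;

// Hypothetical handler body: decide what to do based on the state *now*,
// not the state that caused the event to be queued.
function onCanPlayThrough(video) {
  if (video.readyState >= HAVE_ENOUGH_DATA) {
    return 'start playback';
  }
  // The state regressed (e.g. back to HAVE_CURRENT_DATA) between the
  // event being queued and the handler running.
  return 'keep waiting';
}

// Simulate the race described above, with plain objects standing in
// for the media element:
console.log(onCanPlayThrough({ readyState: HAVE_CURRENT_DATA })); // "keep waiting"
console.log(onCanPlayThrough({ readyState: HAVE_ENOUGH_DATA }));  // "start playback"
```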

> A related surprise is that although a media element delays the document 
> load event until the readyState reaches HAVE_CURRENT_DATA, it is 
> possible for a loadeddata event handler to actually run after the 
> document load event handler.

That's true, because the media element's events are all fired on the 
element's own task source, and are therefore not guaranteed to be ordered 
with respect to the DOM manipulation task source (which is used for the 
document-wide 'load' event).

The reason for this is that we don't want to have to guarantee the order 
of events between two <video> elements, so they can't use the same task 
source, but we _do_ want to make sure that events from a particular media 
element are ordered with respect to each other.

Again, I'm not sure what to do about this.

> An obvious approach to avoid these surprises is to arrange for the state 
> changes to not be reflected in the element until the associated event 
> actually fires. That's a problem if you apply it generally, though. If 
> you delay changes to the 'currentTime' attribute until the associated 
> timeupdate event runs, either 'currentTime' does not reflect what is 
> actually being displayed or your video playback depends on timely JS 
> event execution --- both of those options are unacceptable. And allowing 
> 'currentTime' to advance while the readyState is still at 
> HAVE_CURRENT_DATA seems like it could be confusing too.


On Thu, 9 Apr 2009, Boris Zbarsky wrote:
> For what it's worth, there are similar situations elsewhere.  For 
> example, the currently proposed spec for stylesheet load events says 
> those fire asynchronously, so it looks to me like they could fire after 
> onload.

Actually, the way this is defined, they will always fire before the main 
load event (the events are both fired on the same task source, so their 
ordering is defined).

On Sat, 18 Apr 2009, Biju wrote:
> from https://bugzilla.mozilla.org/show_bug.cgi?id=480376
> > It's not too uncommon for videos to have no audio track. It would be 
> > really nice if the video controls could indicate this, so that users 
> > know why there's no sound ("is something broken? is my volume too low? 
> > wtf?").
> >
> > Unfortunately this info isn't available through the media element API, 
> > so this would need to be added to the HTML5 spec. The simplest way to 
> > expose this would be as |readonly boolean hasAudio|. Is the media 
> > backend capable of determining this?
> we need a hasAudio JS-only property for the video element

The notes in the spec for the next version of the API mention:

    * hasAudio, hasVideo, hasCaptions, etc
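
If such a property were added, scripts could feature-test for it and degrade gracefully. A sketch, with hasAudio being hypothetical (it is not in the current spec) and plain objects standing in for media elements:

```javascript
// Hypothetical: hasAudio does not exist in HTML5 yet. This sketches how
// page-provided controls might use it, saying nothing on older UAs.
function audioIndicator(video) {
  if (typeof video.hasAudio === 'undefined') {
    return 'unknown';  // property not supported: show no indicator
  }
  return video.hasAudio ? 'has audio' : 'no audio track';
}

// Plain objects standing in for media elements:
console.log(audioIndicator({}));                  // "unknown"
console.log(audioIndicator({ hasAudio: false })); // "no audio track"
```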

On Sat, 18 Apr 2009, Biju wrote:
> if a video element has already loaded/is already playing video from the URL
> http://mysite.com/aaaaa.ogg
> and we want to play another file, http://mysite.com/bbbbb.ogg,
> we have to use the following JS code
>      v = $('v1');
>      v.src = "http://mysite.com/bbbbb.ogg";
>      v.load();
>      v.play();
> Why can't it be as simple as
>    v = $('v1');
>    v.play("http://mysite.com/bbbbb.ogg");
> Similarly for load
>    v = $('v1');
>    v.load("http://mysite.com/bbbbb.ogg");

Is saving two lines really that big of a deal?

On Mon, 20 Apr 2009, Philip Jägenstedt wrote:
> Since static markup uses the src attribute it needs to be supported via 
> the DOM, so adding a parameter to load/play would only mean that there 
> are several ways to do the same thing. I'm not sure replacing an already 
> playing media resource is an important enough use case to make such a 
> change to an already complex spec.


On Mon, 20 Apr 2009, Biju wrote:
> I did not mean to remove option for assigning to .src property.

That's the problem. It would increase the size of the API.

> This will make web developers' work easier, i.e. the JS code will most 
> of the time become a third of the size for the same operation.

If saving two lines in this case is that much of a big deal, I recommend 
writing a wrapper function that takes an ID and a URL and finds the 
relevant video element, updates the src="", and reloads it. Then it's just 
one line of code. Problem solved. :-)

On Mon, 20 Apr 2009, Biju wrote:
> I am sorry if I am missing something, but how does adding it make the 
> spec complex?

Anything we add makes the spec more complex. Two API members is more than 
one API member.

> So remaining logic is only
> HTMLVideoElement.prototype.newPlay =
> function newPlay(url){
>   if(arguments.length) this.src = url;
>   this.load();
>   this.play();
> }

See, no need for the browsers to do it. :-)

On Sun, 19 Apr 2009, Biju wrote:
> For video tags we need "oncontrolson" and "oncontrolsoff" events. This 
> will be useful if a web page author wants to hide his interface when the 
> native controls become active.
> An example where it is useful:
> 1. Get a Firefox trunk build
> 2. go to http://people.opera.com/howcome/2007/video/opacity.html
> 3. right-click on the video to get the context menu
> 4. select "Show Controls"
> Result
> The JS controls provided by the web page overlap the native controls.

I've noted this as a feature request for a future version. It may be that 
browsers change their controls so that this is unnecessary (e.g. by 
always going on top of everything).

On Thu, 30 Apr 2009, Silvia Pfeiffer wrote:
> > On Wed, 8 Apr 2009, Silvia Pfeiffer wrote:
> >>
> >> Note that in the Media Fragment working group even the specification 
> >> of http://www.example.com/t.mov#time="10s-20s" may mean that only the 
> >> requested 10s clip is delivered, especially if all the involved 
> >> instances in the exchange understand media fragment URIs.
> >
> > That doesn't seem possible since fragments aren't sent to the server.
> The current WD of the Media Fragments WG
> http://www.w3.org/2008/WebVideo/Fragments/WD-media-fragments-reqs/
> specifies that a URL that looks like this
> http://www.w3.org/2008/WebVideo/Fragments/media/fragf2f.mp4#t=12,21
> is to be resolved on the server through the following basic process:
> 1. The UA chops off the fragment and turns it into an HTTP GET request with
> a newly introduced time range header
> e.g.
> GET /2008/WebVideo/Fragments/media/fragf2f.mp4 HTTP/1.1
> Host: www.w3.org
> Accept: video/*
> Range: seconds=12-21
> 2. The server slices the multimedia resource by mapping the seconds to
> bytes and extracting a playable resource (potentially fixing container
> headers). The server will then reply with the closest inclusive range
> in a 206 HTTP response:
> e.g.
> HTTP/1.1 206 Partial Content
> Accept-Ranges: bytes, seconds
> Content-Length: 3571437
> Content-Type: video/mp4
> Content-Range: seconds 11.85-21.16

That seems quite reasonable, assuming the UA is allowed to seek to other 
parts of the video also.
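
The step-1 mapping from fragment to request header could be sketched like this (assuming the #t=start,end syntax and the seconds= Range unit from the WD, neither of which is standard HTTP today; fragmentToRange is an illustrative helper, not proposed API):

```javascript
// Split a media fragment URL into the plain resource URL plus the
// time-based Range header the WD describes. URLs without a recognized
// #t= fragment pass through with no extra headers.
function fragmentToRange(url) {
  var hash = url.indexOf('#');
  if (hash === -1 || url.slice(hash + 1, hash + 3) !== 't=') {
    return { url: url, headers: {} };
  }
  var times = url.slice(hash + 3).split(',');
  return {
    url: url.slice(0, hash),
    headers: { Range: 'seconds=' + times[0] + '-' + times[1] }
  };
}

console.log(fragmentToRange(
  'http://www.w3.org/2008/WebVideo/Fragments/media/fragf2f.mp4#t=12,21'));
```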

> > On Thu, 9 Apr 2009, Jonas Sicking wrote:
> >>
> >> If we look at how fragment identifiers work in web pages today, a 
> >> link such as
> >>
> >> http://example.com/page.html#target
> >>
> >> this displays the 'target' part of the page, but lets the user scroll 
> >> to anywhere in the resource. This feels to me like it maps fairly 
> >> well to
> >>
> >> http://example.com/video.ogg#t=5s
> >>
> >> displaying the selected frame, but displaying a timeline for the full 
> >> video and allowing the user to directly go to any position.
> >
> > Agreed. This is how the spec works now.
> This is also how we did it with Ogg and temporal URIs, but this is not 
> the way in which the standard for media fragment URIs will work.

It sounds like it is. I don't understand the difference.

> >> But I also agree that there is a use case for directing the user to a
> >> specific range of the video, such as your 30 second clip out of 5 hour
> >> video example. Maybe this could be done with syntax like
> >>
> >> http://example.com/video.ogg#r=3600s-3630s
> >
> > Currently the spec has no way to indicate a stop time from the fragment
> > identifier or other out-of-band information, but I agree that we might
> > need to add something like that (e.g. implying a default cue range with
> > autopause-on-exit enabled) at some point.
> The WD of the Media Fragment WG has a stop time in it. We might want
> to add a stopTime DOM attribute.

I haven't added this yet, but I agree that we might want to look into 
this at some future point.

Ian Hickson               U+1047E                )\._.,--....,'``.    fL
http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'
