[whatwg] <video> feedback

Ian Hickson ian at hixie.ch
Wed Mar 25 03:16:32 PDT 2009


On Wed, 4 Mar 2009, Chris Pearce wrote:
>
> The media element spec says:
>
> > If a media element whose |networkState| has the value |NETWORK_EMPTY| 
> > is inserted into a document, the user agent must asynchronously invoke 
> > the media element's resource selection algorithm.
>
> The resource selection algorithm then goes on to set the 
> delaying-the-load-event flag to true. Depending on how the asynchronous 
> invocation is implemented, the document could actually complete loading 
> during the time after the insertion of a media element, but before the 
> resource-selection algorithm sets the delaying-the-load-event flag to 
> true. This means the load event could fire during that time, even 
> though we intended to delay the load event.
> 
> Maybe we should set the delaying-the-load-event flag to true before we 
> asynchronously call the resource-selection algorithm, and then the 
> resource-selection algorithm can set the delaying-the-load-event flag to 
> false if it decides it needs to wait for a src or source element 
> child?

I've fixed this (though not quite as you describe, for simplicity's sake).
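
For anyone trying to reproduce the original race, the scenario was 
roughly the following (just a sketch; whether you actually hit the 
window depends on how the UA schedules the asynchronous step):

  // Old wording: the window 'load' event could fire in the gap between
  // inserting the element and the resource selection algorithm setting
  // the delaying-the-load-event flag.
  var v = document.createElement("video");
  v.src = "foo.ogg";             // networkState is still NETWORK_EMPTY
  document.body.appendChild(v);  // asynchronously invokes resource selection

  window.addEventListener("load", function () {
    // With the fix, this shouldn't run until the video has stopped
    // delaying the load event.
  }, false);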


On Fri, 6 Mar 2009, Chris Pearce wrote:
>
> There's an additional problem with the current media load algorithm 
> spec: it's possible to cause two asynchronous invocations of the 
> resource selection algorithm to run in parallel with the following 
> JavaScript:
> 
> var v = document.createElement("video");
> v.src = "foo.ogg";
> v.load();
> document.body.appendChild(v);
> 
> The load() method will asynchronously invoke the media element's 
> resource selection algorithm, and if that algorithm doesn't execute 
> fast enough in the background to change the networkState, then when we 
> add the video to the document while the networkState is still 
> NETWORK_EMPTY, the add-to-a-document code will asynchronously 
> invoke the resource selection algorithm again.

I fixed this along with the earlier problem.


On Thu, 5 Mar 2009, Robert O'Callahan wrote:
> On Thu, Feb 26, 2009 at 10:19 PM, Ian Hickson <ian at hixie.ch> wrote:
> > On Wed, 25 Feb 2009, Robert O'Callahan wrote:
> > >
> > > Under "Once enough of the media data has been fetched to determine 
> > > the duration of the media resource, its dimensions, and other 
> > > metadata", after setting the state to HAVE_METADATA, steps 7 and 8 
> > > say
> > >
> > > > 7. Set the element's delaying-the-load-event flag to false. This 
> > > > stops delaying the load event.
> > > >
> > > > 8. This is the point at which a user agent that is attempting to 
> > > > reduce network usage while still fetching the metadata for each 
> > > > media resource would stop buffering, causing the networkState 
> > > > attribute to switch to the NETWORK_IDLE value, if the media 
> > > > element did not have an autobuffer or autoplay attribute.
> > >
> > > I suggested HAVE_CURRENT_DATA would be a better state for these 
> > > actions, and I still think so. These actions should not occur until 
> > > the UA is able to display the first frame of the video. Authors 
> > > would want the first frame of a non-autobuffered video to be 
> > > visible, and the document load event should fire after the first 
> > > frame is available by analogy with images.
> >
> > I've updated the note as per your suggestion.
> 
> In step 7 you still stop delaying the load event after loading metadata. 
> I still say we should keep delaying the load event until we reach 
> HAVE_CURRENT_DATA.

Man, I suck at this. Fixed. Again. For real this time I hope.
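
In author-visible terms, the intended ordering is now roughly as follows 
(a sketch, assuming a non-autobuffered video already in the markup):

  var v = document.getElementsByTagName("video")[0];

  // 'loadeddata' fires when readyState first reaches HAVE_CURRENT_DATA,
  // i.e. when the first frame is available.
  v.addEventListener("loadeddata", function () {
    // first frame available
  }, false);

  // The document 'load' event should not fire before that point, by
  // analogy with <img>.
  window.addEventListener("load", function () {
    // document finished loading
  }, false);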


On Fri, 13 Mar 2009, Matthew Gregan wrote:
> 
> It's possible that neither a 'play' nor 'playing' event will be fired 
> when a media element that has ended playback is played again.  When 
> first played, paused is set to false.  When played again, playback has 
> ended, so play() seeks to the beginning, but paused does not change (as 
> it's already false), so the substeps that may fire play or playing are 
> not run.

'play' shouldn't fire, since it was never paused.

'playing' should fire, though, since the readyState will have dropped down 
to HAVE_CURRENT_DATA when the clip has ended, and will go back up to 
HAVE_FUTURE_DATA after seeking.
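
Concretely, for a clip that has ended, something like this should see 
'playing' but not 'play' when it is replayed (a sketch, assuming a 
<video> already in the document):

  var v = document.getElementsByTagName("video")[0];

  v.addEventListener("play", function () {
    // not expected on replay: paused is already false
  }, false);
  v.addEventListener("playing", function () {
    // expected once the seek back to the start completes and readyState
    // is back up to HAVE_FUTURE_DATA
  }, false);

  v.addEventListener("ended", function () {
    v.play(); // seeks back to the beginning; paused stays false
  }, false);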


> This behaviour seems reasonable if the media element has a loop 
> attribute, since playback never really stops (as it restarts immediately 
> upon ending).

Actually the 'playing' event fires even in this case, which is a bit 
weird. Not sure how to suppress it, though, since if seeking takes 
non-zero time, we shouldn't suppress it after all.


On Sun, 15 Mar 2009, Biju wrote:
>
> What I understood from
> http://www.whatwg.org/specs/web-apps/current-work/multipage/video.html#time-ranges
> the following will be the syntax to access a video element's buffered time-ranges
> 
> v1.buffered.length
> v1.buffered.start(i)
> v1.buffered.end(i)
> 
> When I compare it with existing syntax like
> 
> document.links.length
> document.links[0].href
> 
> document.images.length
> document.images[0].src
> 
> window.frames.length
> window.frames[0].history
> 
> I feel it should have been of the pattern
> 
> v1.buffered.length
> v1.buffered[i].start
> v1.buffered[i].end
> 
> So why was the syntax changed for time-ranges?
> (i.e. buffered, seekable and played)

With .links, .images, and .frames, what we are returning are objects that 
exist independent of the collections.

This is more like NameList in DOM Level 3 Core, which has:

 list.getName(i);
 list.getNamespaceURI(i);

...instead of:

 list[i].name;
 list[i].namespaceURI;

The reason to do it this way is to avoid having to create one object per 
range (or name/uri pair, in the NameList case). This saves memory and CPU, 
and generally makes things more resilient (e.g. you don't have to worry 
about garbage collecting items from the array separate from the array).
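
So reading the buffered ranges looks like this (a sketch, with v1 being 
a media element as in your example):

  var ranges = v1.buffered;           // a single TimeRanges object
  for (var i = 0; i < ranges.length; i++) {
    // start() and end() return times in seconds; no per-range objects
    // need to be created.
    var startTime = ranges.start(i);
    var endTime = ranges.end(i);
  }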


On Tue, 17 Mar 2009, Nathanael Ritz wrote:
> 
> My observation is that if it is not already recommended in the spec, 
> browsers should offer some sort of indication that they could not use 
> whatever file format they were supplied, or that they received something 
> like a 404 from the server. Whether it's a broken video box icon, or 
> automatically reverting 
> to the fallback content within the <video> element (if it exists), I 
> think there should be some sort of warning or indication of a failure.

As the spec says:

# In addition to the above, the user agent may provide messages to the 
# user (such as "buffering", "no video loaded", "error", or more detailed 
# information) by overlaying text or icons on the video or other areas of 
# the element's playback area, or in another appropriate manner.
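
(Pages that want their own indication on top of whatever the UA shows 
can also listen for the 'error' event on the element; a rough sketch, 
assuming the element uses a src attribute:)

  var v = document.getElementsByTagName("video")[0];
  v.addEventListener("error", function () {
    // v.error is a MediaError describing what went wrong (network
    // failure, decode error, unsupported format, ...).
    alert("Video failed to load (code " + v.error.code + ")");
  }, false);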


On Wed, 18 Mar 2009, Nathanael Ritz wrote:
>
> I found this in the draft that says:
> 
> "User agents that cannot render the video may instead make the element 
> represent a link to an external video playback utility or to the video 
> data itself."
> 
> But that seems fairly weak. Why not "should" or "must"?

Consider the program "wget", which is an HTML user agent. It would not be 
very useful if it showed a link to an external video playback utility.


> Could the language not be updated here to include showing fallback 
> content (alongside an alert) when video cannot be found or rendered?

If the video can't be found, why would the fallback content work better? 
If there are multiple videos, use the <source> element to list them. Once 
we have a standard codec, there should be no reason to ever have a video 
that can't be rendered.


On Wed, 18 Mar 2009, Nathanael Ritz wrote:
>
> I propose that the video and audio elements have some sort of fallback. 
> Not for accessibility purposes, as that point has been addressed in the 
> spec. But so that it is clear there was supposed to be a video (or 
> audio) resource that for whatever reason can't be seen. Isn't this 
> especially important considering the current debate about .ogg vs other 
> formats as the standard?

The <source> element provides the fallback.
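
That is, you list the alternative encodings and the UA picks the first 
one it can play; in script terms it amounts to something like this (a 
sketch; the file names and types are made up):

  var v = document.createElement("video");

  var ogg = document.createElement("source");
  ogg.src = "clip.ogv";
  ogg.type = "video/ogg";
  v.appendChild(ogg);

  var mp4 = document.createElement("source");
  mp4.src = "clip.mp4";
  mp4.type = "video/mp4";
  v.appendChild(mp4);

  // Content inside <video> other than <source> elements is only shown
  // by UAs that don't support <video> at all.
  v.appendChild(document.createTextNode("Video not supported."));

  document.body.appendChild(v);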


On Wed, 18 Mar 2009, Nathanael Ritz wrote:
>
> The comment "it is the page author’s responsibility to provide at least 
> one video source that is guaranteed to work." is not realistic. How is 
> an author supposed to guarantee a video works across all systems and 
> platforms? What if the video does work now but the resource is eventually 
> removed for whatever reason?

Once we have a standard codec, that will be how you can guarantee that the 
video will work everywhere.

If the video is removed from the hosting site, then the fallback is 
unlikely to be much use either.

If all you want to show is an error message, the browser should do that.


On Wed, 18 Mar 2009, Kristof Zelechovski wrote:
>
> I would expect video type text/html to work everywhere for fallback; the 
> text can contain an error message.

text/html isn't a video format, and it is not expected that it would be 
supported.


On Thu, 19 Mar 2009, Robert O'Callahan wrote:
> 
> It actually might be interesting to specify that resource types that the 
> browser knows how to handle itself should be usable in <video>, which 
> would then behave much like <object>.

That sounds like a pretty convincing argument for not doing it. :-)


On Mon, 23 Mar 2009, Emil Tin wrote:
> 
> i understand that SVG is meant for advanced timing etc.
> 
> but it would be very useful to have a simple mechanism in 
> html/javascript for playing sounds together. conceptually, sounds would 
> be placed on a timeline at a certain time. the sounds on the timeline 
> can then be played back together and will start at the right times.

The lack of this feature at this time is intentional. We may add this in 
the future, once the browsers implement what we have now in a reliable 
manner, but in the meantime I recommend using SMIL for this purpose.
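
(For illustration only, the closest approximation in script today is 
something like the following, which is exactly the kind of thing that 
isn't guaranteed to stay in sync; the file names and offsets are made 
up:)

  // Schedule each sound at an offset (in milliseconds) from a common
  // start; nothing here is sample-accurate, and play() may itself be
  // delayed if the data isn't buffered yet.
  function playAt(url, offsetMs) {
    var a = new Audio(url);
    setTimeout(function () { a.play(); }, offsetMs);
  }

  playAt("kick.ogg", 0);
  playAt("snare.ogg", 500);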

-- 
Ian Hickson               U+1047E                )\._.,--....,'``.    fL
http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'

