[whatwg] Extending HTML 5 video for adaptive streaming
Aaron Colwell
acolwell at google.com
Fri Jul 1 12:06:47 PDT 2011
Hi Bob,
Comments inline
On Fri, Jul 1, 2011 at 8:40 AM, Bob Lund <B.Lund at cablelabs.com> wrote:
> Hi Aaron,
>
> Here are some other aspects of script-controlled adaptive bit rate that
> occur to me; perhaps you have already considered these.
>
> 1) I guess the script will be responsible for maintaining its own playback
> buffer, monitoring buffer behavior, and selecting the appropriate bit rate
> for new fragments. Are there any other network-related events/metrics the
> script might need to determine which bit rate to fetch for the next
> segment? Is there any other information from the user agent about playback
> performance that the script might need?
>
>
The script would be responsible for managing buffering. It can use the
currentTime & buffered attributes on the video tag to monitor the
consumption of the data passed in via appendData(). I believe the attributes
being proposed in the video metrics proposal
<http://wiki.whatwg.org/wiki/Video_Metrics#Proposal> could also be helpful.
Right now I'm just using XMLHttpRequest to fetch WebM clusters and measuring
how long the fetches take to create a bandwidth estimate. I haven't spent
much time on the bandwidth measurement & adaptation algorithms yet; I'm just
trying to nail down the mechanism for passing the media data to the browser
first.
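
To make that concrete, here is a rough sketch of the kind of script loop I
have in mind. The bitrate ladder, URL scheme, and buffering threshold are
made-up placeholders, and appendData() is of course the method being
proposed in this thread, so treat it as illustrative rather than working
code:

  var video = document.querySelector('video');
  var bitrates = [250000, 500000, 1000000]; // hypothetical ladder, bits/sec
  var currentLevel = 0;
  var fetching = false;

  function fetchNextCluster(url) {
    fetching = true;
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url);
    xhr.responseType = 'arraybuffer';
    var startTime = Date.now();
    xhr.onload = function() {
      // Derive a crude bandwidth estimate from the transfer time.
      var seconds = (Date.now() - startTime) / 1000;
      var bitsPerSecond = (xhr.response.byteLength * 8) / seconds;

      // Pick the highest bitrate comfortably under the estimate.
      // (A real algorithm would smooth over several fetches.)
      currentLevel = 0;
      for (var i = 0; i < bitrates.length; i++) {
        if (bitrates[i] < bitsPerSecond * 0.8)
          currentLevel = i;
      }

      // Hand the WebM cluster to the media element, per the proposal.
      video.appendData(new Uint8Array(xhr.response));
      fetching = false;
    };
    xhr.send();
  }

  // Keep roughly 5 seconds buffered ahead of the playback position.
  setInterval(function() {
    if (fetching) return;
    var b = video.buffered;
    var ahead = b.length ? b.end(b.length - 1) - video.currentTime : 0;
    if (ahead < 5)
      fetchNextCluster('video_' + bitrates[currentLevel] + '/next_cluster');
  }, 1000);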
> 2) If a media resource is a multi-track resource, then it would seem the
> script will also have to fetch fragments for those tracks, which implies
> that the audio element would need the append method. Timed text tracks
> would also need to be processed and cues appended.
>
>
The idea is that appendData() can receive media for multiple tracks. In the
case of WebM, each cluster can have blocks from different tracks multiplexed
together. The initial stream config information contains the track mappings
necessary to demux the cluster. I was also planning to allow both
multiplexed and demultiplexed clusters. Cluster timecodes must be in
monotonically increasing order, but it would be possible to call
appendData() with a cluster containing only audio data followed by a cluster
containing only video data. This would allow straightforward support for
deployments where the audio & video tracks for a single presentation are in
separate WebM files.
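
For example, assuming the audio and video clusters have already been fetched
and parsed out of their separate files (each entry below is a placeholder
object holding a cluster's starting timecode and its bytes), the script
would just interleave the appends by timecode so the sequence never goes
backwards:

  function feedDemuxedStreams(video, audioClusters, videoClusters) {
    var a = 0, v = 0;
    while (a < audioClusters.length || v < videoClusters.length) {
      var nextAudio = (a < audioClusters.length) ? audioClusters[a] : null;
      var nextVideo = (v < videoClusters.length) ? videoClusters[v] : null;

      // Always append whichever cluster starts earlier so the overall
      // sequence of cluster timecodes never decreases.
      if (nextVideo === null ||
          (nextAudio !== null && nextAudio.timecode <= nextVideo.timecode)) {
        video.appendData(nextAudio.data);  // audio-only cluster
        a++;
      } else {
        video.appendData(nextVideo.data);  // video-only cluster
        v++;
      }
    }
  }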
> There is a new media pipeline task force in the Web and TV IG (
> http://www.w3.org/2011/webtv/wiki/MPTF) that is also planning to examine
> this topic. You may want to participate.
>
>
I have signed up for the mailing list and will take some time to catch up
on the archives.
Thanks for your comments.
Aaron