[whatwg] Proposal for a MediaSource API that allows sending media data to a HTMLMediaElement

Aaron Colwell acolwell at google.com
Tue Jul 12 16:30:28 PDT 2011

On Tue, Jul 12, 2011 at 4:17 PM, Robert O'Callahan <robert at ocallahan.org> wrote:

> On Wed, Jul 13, 2011 at 11:14 AM, Aaron Colwell <acolwell at google.com> wrote:
>> I'm open to that. In fact that is how my current prototype is implemented
>> because it was the least painful way to test these ideas in WebKit. My
>> prototype only implements append() and uses existing media element events as
>> proxies for the events I've proposed. I only separated this out into a
>> separate object because I thought people might prefer an object to represent
>> the source of the media and leave the media element object an endpoint for
>> controlling media playback.
> We're kinda stuck with media elements handling both playback endpoints and
> resource loading.

Ok. This makes implementation in WebKit easier for me, so I won't push too
hard to keep it separate from the media element. :)

>>> Do you need to support seeking with this API? That's hard. It would be
>>> simpler if we didn't have to support seeking. Instead of seeking you could
>>> just open a new stream and pour data in for the new offset.
>>  I'd like to be able to support seeking so you can use this mechanism for
>> on-demand playback. In my prototype seeking wasn't too difficult to
>> implement. I just triggered it off the seeking event. Any append() that
>> happens after the seeking event fires is associated with the new seek
>> location. currentTime is updated with the timestamp in the first cluster
>> passed to append() after the seeking event fires. Once the media engine has
>> this timestamp and enough preroll data, then it will fire the seeked event
>> like normal. I haven't tested this with rapid-fire seeking yet, but I think
>> this mechanism should work.
> How do you communicate the data offset that the element wants to read at
> over to the script that provides the data? In general you can't know the
> strategy the decoder/demuxer uses for seeking, so you don't know what data
> it will request.

I'm doing WebM demuxing and media fetching in JavaScript. When a seek
occurs, I look at currentTime to see where we are seeking to. I then look at
the CUES index data I've fetched to find the file offset for the closest
seek point to the desired time. The appropriate data is fetched and pushed
into the element via append(). The seeked event firing and readyState
transitioning to HAVE_FUTURE_DATA or HAVE_ENOUGH_DATA tells me when I've
sent the element enough data. During playback I just monitor the buffered
attribute to keep a specific duration ahead of the current playback time.
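To make the flow concrete, here is a rough sketch of that seek path in JavaScript. It assumes the proposed append() method on the media element, a parsed CUES index represented as an array of {time, offset} entries sorted by time, and a hypothetical fetchRange() helper for pulling byte ranges of the WebM file; none of these names are from the prototype itself.

```javascript
// Find the closest seek point at or before the target time.
// `cues` is assumed to be sorted by time, ascending.
function findSeekPoint(cues, targetTime) {
  let candidate = cues[0];
  for (const cue of cues) {
    if (cue.time <= targetTime) {
      candidate = cue;
    } else {
      break;
    }
  }
  return candidate;
}

// Rough outline of the seek handling described above. `video` is the
// media element, `cues` is the index parsed from the file's CUES
// element, and `fetchRange` is a hypothetical helper that resolves
// with cluster data starting at the given file offset.
function onSeeking(video, cues, fetchRange) {
  const seekPoint = findSeekPoint(cues, video.currentTime);
  fetchRange(seekPoint.offset).then((clusterData) => {
    // Data appended after the seeking event fires is associated with
    // the new seek location; the media engine fires seeked once it
    // has the timestamp and enough preroll data.
    video.append(clusterData);
  });
}
```

The same findSeekPoint() lookup is what ties currentTime to a file offset; everything else is just fetching from that offset and appending until readyState reaches HAVE_FUTURE_DATA or HAVE_ENOUGH_DATA.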
