[whatwg] Timed tracks: feedback compendium
Odin Omdal Hørthe
odin.omdal at gmail.com
Tue Oct 19 15:59:14 PDT 2010
On Wed, Sep 8, 2010 at 1:19 AM, Ian Hickson <ian at hixie.ch> wrote:
>> [...] You're also excluding roll-on captions then which is a feature of
>> live broadcasting.
> It isn't clear to me that an external file would be a good solution for
> live broadcasting, so I'm not sure this really matters.
The standards-loving Agency for Public Management and eGovernment here
in Norway are getting their eyes up for HTML5 video (like the rest of
the world), and are kicking the tires. I've been streaming many
conferences with Ogg Theora, using Cortado as a fallback for legacy
browsers.

Now it has come to the point where we are required to follow the WAI
WCAG requirements, so we have to caption the live video
streams/broadcasts.
Given the (not surprising) lack of support for Timed Tracks on live
streams in browsers, I'm at this point going to burn the text into the
video itself. However, that is not a good long-term solution. When
browsers implement the new startOffsetTime, I will be able to send the
captions separately (along with the slide images).
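To make the idea concrete, here is a rough sketch of the timing
arithmetic this would enable, assuming startOffsetTime gives the
wall-clock time corresponding to media time zero (the helper name and
the example times are my own invention):

```javascript
// Sketch: map a wall-clock event time to a position on the media
// timeline, given the stream's startOffsetTime (assumed here to be
// the wall-clock time of media time zero).
function wallClockToMediaTime(startOffsetTimeMs, eventTimeMs) {
  // Position on the media timeline, in seconds.
  return (eventTimeMs - startOffsetTimeMs) / 1000;
}

// Example: a caption spoken 90 seconds after the broadcast began
// should be displayed at media time 90.
const streamStart = Date.UTC(2010, 9, 19, 15, 0, 0); // hypothetical
const spokenAt = streamStart + 90 * 1000;
console.log(wallClockToMediaTime(streamStart, spokenAt)); // 90
```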
However, it would be very nice to be able to send this to the
caption track, and not have to reimplement a user interface for
choosing whether to see captions etc. (I assume user agents will
provide that).
There will presumably also be other benefits to streaming directly
as a timed track, such as the user agent knowing what the data is (so
that it can do smart things with it).
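For what it's worth, the script side of this already looks tractable:
a sketch, assuming a user agent that implements addTextTrack() and a
cue constructor (the helper below is my own and just packages a live
caption into cue shape, so it can run outside a browser):

```javascript
// Hypothetical helper: package one incoming live caption line into
// the (startTime, endTime, text) shape a cue needs. Pure data, so it
// also runs outside a browser.
function liveCaptionCue(nowSeconds, text, holdSeconds) {
  return { startTime: nowSeconds, endTime: nowSeconds + holdSeconds, text: text };
}

// In a browser it would be wired up roughly like this (untested
// sketch; the socket and the cue constructor are assumptions):
//   const track = video.addTextTrack('captions', 'Live captions', 'en');
//   track.mode = 'showing';
//   socket.onmessage = function (e) {
//     const c = liveCaptionCue(video.currentTime, e.data, 4);
//     track.addCue(new VTTCue(c.startTime, c.endTime, c.text));
//   };

console.log(liveCaptionCue(12.5, 'Hello from Oslo', 4));
// → { startTime: 12.5, endTime: 16.5, text: 'Hello from Oslo' }
```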
Accessibility is a fairly universal requirement, and it would be very
nice if live streaming could be part of the same framework.
What other way is there to caption such live conferences, or even to
bring real-time metadata out of a live video?
Maybe I could even send JSON describing the new slides as they appear
in the metadata track? Or even send the slides (images) themselves as
data: URLs in the track?
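Sketching what that might look like, assuming a 'metadata' text track
whose cue text carries JSON (the payload shape and helper name are my
own invention, not anything from the spec):

```javascript
// Hypothetical payload: one slide-change event, serialized as JSON
// for the text of a cue in a 'metadata' track.
function slideCuePayload(slideIndex, imageUrl) {
  return JSON.stringify({ type: 'slide', index: slideIndex, src: imageUrl });
}

// Browser side, roughly (untested sketch; showSlide is assumed):
//   const meta = video.addTextTrack('metadata');
//   meta.oncuechange = function () {
//     const cue = meta.activeCues[0];
//     if (cue) showSlide(JSON.parse(cue.text));
//   };

console.log(slideCuePayload(3, 'slides/3.png'));
// → {"type":"slide","index":3,"src":"slides/3.png"}
```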
Odin Hørthe Omdal <odin.omdal at gmail.com>