[whatwg] Scripted querying of <video> capabilities

Jeremy Doig jeremydo at google.com
Thu Aug 7 10:59:18 PDT 2008


How would this work (say) for different AVC profile levels and features
(e.g. PAFF support)? Would we require video creators to know the specific
capabilities of every FourCC target?
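
A hedged sketch of how that profile-level detail would have to surface under
the supportsType() interface proposed below, assuming RFC 4281-style codec
strings (supportsType() is only a proposal in this thread, not an existing
API):

// Hypothetical sketch: RFC 4281 packs the AVC profile and level into the
// "codecs" parameter itself, e.g. avc1.42E01E = Baseline profile, level 3.0,
// and mp4a.40.2 = AAC-LC. Authors would need to know these codes for every
// target they care about.
var video = document.createElement('video');
if (video.supportsType('video/mp4; codecs="avc1.42E01E, mp4a.40.2"')) {
  // client claims H.264 Baseline @ L3.0 + AAC-LC playback via <video>
}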

On Thu, Aug 7, 2008 at 4:23 AM, Tim Starling <tstarling at wikimedia.org> wrote:

> Mikko Rantalainen wrote:
> > Tim Starling wrote:
> >
> >> Henri Sivonen wrote:
> >>
> >>> On Aug 7, 2008, at 09:53, Tim Starling wrote:
> >>>
> >>>
> >>>>        xiphQtVersion = videoElt.GetComponentVersion('imdc','XiTh',
> >>>> 'Xiph');
> >>>>
> >>> This kind of FourCC use is exactly the kind of thing I meant earlier
> >>> when I asked if the MIME stuff is really the best match for frameworks.
> >>>
> >> FourCC and MIME are not mutually exclusive, see RFC 2361.
> >>
> >
> > RFC 2361 doesn't seem to provide a method for describing both video
> > codec and audio codec for a resource. The same parameter ("codec") is
> > used for both audio and video codec information but I cannot understand
> > how a resource that contains both video and audio should be labeled.
> >
> > RFC 4281 seems to provide a "codecs" parameter that is a comma-separated
> > list of the codecs used. The level of detail required ("MUST") for the
> > "codecs" parameter seems quite high, and I'm afraid that this parameter
> > will not be used correctly in the real world, unfortunately. In theory,
> > this would be a good way to provide the information needed for a resource.
> >
> >
>
> Well, I was thinking of an interface which would list the codecs, rather
> than the overall file types. But it could be done either way. By file type:
>
> if ( video.supportsType( 'application/ogg;codecs="vorbis,theora"' ) ) {
>  ...
> }
>
> By codec/container:
>
> if ( 'application/ogg' in video.supportedTypes
>  && 'video/theora' in video.supportedTypes
>  && 'audio/vorbis' in video.supportedTypes )
> {
>  ...
> }
>
> The first one looks easier in this application, and allows for multiple
> backends with different allowed combinations of container and codec. The
> second one allows for complete enumeration of client capabilities.
>
> Obviously you couldn't provide an interface to enumerate every possible
> combination of container, audio codec and video codec.
>
> The reason this is needed, as opposed to using multiple <source> tags,
> is that, inevitably, some clients will support certain formats via
> <object> (or in our special case, <applet>) and not via <video>.
> Querying the browser's <video> capabilities allows JS to decide what
> sort of embedding method to use. It's an essential migration feature.
>
> -- Tim Starling
>
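
To make the migration point concrete, a rough sketch of the fallback logic
Tim describes, again assuming a supportsType() method along the lines
proposed above (the plugin fallback shown is illustrative only):

// Use native <video> when the browser claims support for the Ogg
// Theora/Vorbis combination; otherwise fall back to <object>-based
// (plugin or applet) playback.
function embedOgg(container, url) {
  var video = document.createElement('video');
  if (video.supportsType &&
      video.supportsType('application/ogg; codecs="vorbis,theora"')) {
    video.src = url;
    video.controls = true;
    container.appendChild(video);
  } else {
    var obj = document.createElement('object');
    obj.data = url;
    obj.type = 'application/ogg';
    container.appendChild(obj);
  }
}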