[whatwg] Scripted querying of <video> capabilities

Tim Starling tstarling at wikimedia.org
Thu Aug 7 04:23:26 PDT 2008


Mikko Rantalainen wrote:
> Tim Starling wrote:
>> Henri Sivonen wrote:
>>> On Aug 7, 2008, at 09:53, Tim Starling wrote:
>>>
>>>>        xiphQtVersion = videoElt.GetComponentVersion('imdc', 'XiTh',
>>>>                                                     'Xiph');
>>>>
>>> This kind of FourCC use is exactly the kind of thing I meant earlier
>>> when I asked if the MIME stuff is really the best match for frameworks.
>>
>> FourCC and MIME are not mutually exclusive; see RFC 2361.
>
> RFC 2361 doesn't seem to provide a method for describing both the video
> codec and the audio codec of a resource. The same parameter ("codec") is
> used for both audio and video codec information, but I cannot see how a
> resource that contains both video and audio should be labelled.
>
> RFC 4281 seems to provide a "codecs" parameter that is a comma-separated
> list of the codecs used. The level of detail required ("MUST") for the
> "codecs" parameter seems quite high, and I'm afraid that this parameter
> will not be used correctly in the real world, unfortunately. In theory,
> this would seem a good way to provide the information needed for a
> resource.

Well, I was thinking of an interface that would list the codecs rather
than the overall file types, but it could be done either way. By file type:

if (video.supportsType('application/ogg;codecs="vorbis,theora"')) {
  ...
}

By codec/container:

if ('application/ogg' in video.supportedTypes
    && 'video/theora' in video.supportedTypes
    && 'audio/vorbis' in video.supportedTypes) {
  ...
}

The first one looks easier for this application, and allows for multiple
backends with different permitted combinations of container and codec.
The second one allows for complete enumeration of client capabilities.
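As a rough sketch (note that `supportedTypes` here is the enumeration
interface proposed above, not a shipping API), the combined query from
the first example could be emulated on top of the second interface:

```javascript
// Hypothetical shim: answers the combined Ogg/Vorbis/Theora query
// using the proposed per-type enumeration interface. Both the
// video.supportedTypes map and its key names are assumptions from
// the sketch above, not an implemented API.
function supportsOggVorbisTheora(video) {
  return 'application/ogg' in video.supportedTypes
      && 'video/theora' in video.supportedTypes
      && 'audio/vorbis' in video.supportedTypes;
}
```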

Obviously you couldn't provide an interface that enumerates every
possible combination of container, audio codec and video codec.

The reason this is needed, as opposed to using multiple <source> tags,
is that inevitably some clients will support certain formats via
<object> (or, in our special case, <applet>) but not via <video>.
Querying the browser's <video> capabilities allows JS to decide which
sort of embedding method to use. It's an essential migration feature.
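That migration logic might look something like this (assumption: the
`supportsType` method is the interface proposed in this message, not a
shipping API; the return values are just illustrative labels):

```javascript
// Sketch: prefer the native <video> element when the client reports
// support for Ogg with Vorbis and Theora; otherwise fall back to a
// plugin-based player embedded via <object>/<applet>.
function chooseEmbedMethod(video) {
  if (video && typeof video.supportsType === 'function'
      && video.supportsType('application/ogg;codecs="vorbis,theora"')) {
    return 'video';  // native playback
  }
  return 'applet';   // plugin fallback
}
```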

-- Tim Starling


