[whatwg] Apple Proposal for Timed Media Elements
Ian Hickson
ian at hixie.ch
Fri Oct 12 14:54:51 PDT 2007
On Wed, 21 Mar 2007, Robert Sayre wrote:
>
> My two cents: we should put off events and other API pieces that address
> editing applications. It is possible to write web versions of things
> like iMovie and SoundEdit in Flash right now, but I don't think it is
> realistic to capture that stuff in a first effort. We should focus on
> playback and consumption for v1. So my question for any proposal right
> now would be: "why is the feature needed for something analogous to a
> VCR or YouTube screen?"
Agreed.
> > > For <audio> in general, there's been very little demand for <audio>
> > > other than from people suggesting that it makes abstract logical
> > > sense
>
> I disagree. It's been pointed out by multiple people that <video> will
> be used for audio. That could be quite likely if the page author wants
> to send ogg vorbis audio.
We have <audio> now.
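For example (filename hypothetical), an Ogg Vorbis file can now be
referenced directly:

   <audio src="music.ogg" controls></audio>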
> > > * What's the use case for hasAudio or hasVideo? Wouldn't the author
> > > know
> > > ahead of time whether the content has audio or video?
> >
> > That depends. If you are displaying one fixed piece of media, then
> > sure. If you are displaying general user-selectable content...
>
> This reasoning seems sound to me. In general, I am wary of proposals
> that require control over both sides of the wire to be effective.
Right now you can tell if you have video content by checking the
videoWidth and videoHeight attributes. There is no equivalent for audio.
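A minimal sketch of that check (the element lookup is hypothetical; note
the dimensions are only meaningful once the metadata has loaded):

   var v = document.getElementsByTagName('video')[0];
   if (v.videoWidth > 0 && v.videoHeight > 0) {
     // the current resource includes video content
   } else {
     // 0x0: audio-only content, or metadata not yet available
   }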
> > We have included a mechanism for static fallback based on container
> > type and codec, so that it's possible to choose the best video format
> > for a client even if user agent codec support varies.
>
> What existing markup leads us to believe this will be an effective
> method for content negotiation?
<source> gets around this by moving the selection to the client.
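For instance (filenames and codec parameters hypothetical):

   <video controls>
     <source src="clip.ogv" type='video/ogg; codecs="theora, vorbis"'>
     <source src="clip.mp4" type='video/mp4; codecs="avc1.42E01E, mp4a.40.2"'>
   </video>

The user agent walks the <source> list in order and plays the first
resource whose container and codecs it supports, so no server-side
negotiation is required.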
On Thu, 22 Mar 2007, Martin Atkins wrote:
>
> To me, the distinction between the <audio> element and the Audio object
> is that the former has a "place" in the document where that audio
> content logically belongs, while the latter is more of a global trigger
> for web application sound effects.
>
> <audio> could, for example, be rendered in-line with surrounding text in
> an aural browser. A visual browser would presumably provide some kind of
> representation in the document of the audio which the user can interact
> with.
>
> In other words, <audio> should be like <img> for sound.
>
> Of course, what the visual representation of <audio> should be is not an
> easy decision. It's even harder than <video>, because there's no
> inherent visual content to overlay a UI on top of.
I have tried to make the spec reflect this.
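So, for example (prose and filename hypothetical), an <audio> element can
sit in the document flow the way an <img> does:

   <p>Listen to the interview:
      <audio src="interview.ogg" controls></audio></p>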
On Thu, 22 Mar 2007, Maciej Stachowiak wrote:
>
> I generally agree, but note that new Image() makes an <img> element, so
> new Audio() could work analogously.
Yes, this is what the spec now says.
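A sketch of the analogy (URL hypothetical):

   var sound = new Audio("click.wav"); // constructs an <audio> element
   sound.play();

As with new Image(), the constructor takes an optional src argument and
returns an element that can also be inserted into the document.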
> I think <audio> is useful for foreground/semantic audio, as opposed to
> purely presentational sound effects, because non-browser tools analyzing
> a document would have a harder time finding audio referenced only from
> script. (Imagine a "most-linked MP3s on the web" feature in a search
> engine.)
Maybe.
> > Of course, what the visual representation of <audio> should be is not
> > an easy decision. It's even harder than <video>, because there's no
> > inherent visual content to overlay a UI on top of.
>
> I think it would be no visual representation by default with no
> controller, and just controls otherwise.
That's what the spec now says, I believe.
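In markup terms (filename hypothetical):

   <audio src="theme.ogg"></audio>           <!-- renders nothing -->
   <audio src="theme.ogg" controls></audio>  <!-- user agent controls -->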
--
Ian Hickson U+1047E )\._.,--....,'``. fL
http://ln.hixie.ch/ U+263A /, _.. \ _\ ;`._ ,.
Things that are impossible just take longer. `._.-(,_..'--(,_..'`-.;.'