[whatwg] Peer-to-peer communication, video conferencing, and related topics (2)

Robert O'Callahan robert at ocallahan.org
Mon Mar 28 22:17:47 PDT 2011


Ian Hickson wrote:

> I agree that (in the long term) we should support stream filters on
> streams, but I'm not sure I understand <video>'s role in this. Wouldn't it
> be more efficient to have something that takes a Stream on one side and
> outputs a Stream on the other, possibly running some native code or JS in
> the middle?


We could.
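
Something like this, purely as an illustration of the shape (none of these
names exist in any spec; I'm modelling a Stream as an async sequence of audio
chunks only for the sake of the example):

type AudioChunk = Float32Array;
type StreamLike = AsyncIterable<AudioChunk>;

// A Stream-to-Stream filter: takes a Stream on one side, produces a Stream
// on the other, and runs an arbitrary JS callback per chunk in the middle.
function filterStream(
  source: StreamLike,
  process: (chunk: AudioChunk) => AudioChunk
): StreamLike {
  return (async function* () {
    for await (const chunk of source) {
      yield process(chunk);
    }
  })();
}

// Usage sketch: halve the gain of everything flowing from source to sink.
// const quieter = filterStream(sourceStream, chunk => chunk.map(s => s * 0.5));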

I'm trying to figure out how this is going to fit in with audio APIs. Chris
Rogers from Google is proposing a graph-based audio API to the W3C Audio
Incubator Group, which would overlap considerably with a Stream-processing
API like the one you're suggesting (although in his proposal processing
nodes, not streams, are first-class).
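
To make that difference in emphasis concrete (illustrative names only, not
Chris's actual proposal): in a node-graph design, wiring nodes together is
the whole program and no Stream object ever changes hands in author code.

interface ProcessingNode {
  connect(destination: ProcessingNode): void;
}

function makeNode(): ProcessingNode {
  const downstream: ProcessingNode[] = [];
  return {
    connect(destination: ProcessingNode): void {
      downstream.push(destination);
    },
  };
}

// Hypothetical graph: decoder -> gain filter -> output device.
const source = makeNode();
const gainFilter = makeNode();
const destination = makeNode();
source.connect(gainFilter);
gainFilter.connect(destination);

// A Stream-first design would instead look like sink(filter(sourceStream)),
// where the Stream itself is the value passed from stage to stage.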

A fundamental problem here is that HTML media elements have the
functionality of both sources and sinks. You want to see <video> and <audio>
only as sinks that accept streams. But in that case, if we create an audio
processing API using Streams, we'll need a way to download stream data for
processing that doesn't use <audio> and <video>, which means we'll need to
replicate the src attribute, <source> elements and their type attributes,
networkState, readyState, possibly the loop attribute... Should we introduce a new object or element
that provides those APIs? How much can be shared with <video> and <audio>?
Should we be trying to share? (In Chris Rogers' proposal, <audio> elements
are used as sources, not sinks.)
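
For concreteness, here is roughly the surface such a standalone source object
would end up duplicating from the media elements (hypothetical names, nothing
like this is specified anywhere):

// Hypothetical only: a standalone "stream source" object that loads media
// for processing without going through <audio>/<video>. Note how much of
// the media element surface it has to duplicate.
interface StreamSource {
  src: string;                    // duplicates the media element src attribute
  type: string;                   // duplicates <source type="...">
  loop: boolean;                  // duplicates the loop attribute
  readonly networkState: number;  // duplicates networkState (NETWORK_EMPTY, ...)
  readonly readyState: number;    // duplicates readyState (HAVE_NOTHING, ...)
  readonly stream: AsyncIterable<Float32Array>;  // the decoded output
  load(): void;                   // duplicates the resource-selection machinery
}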

Rob

