[whatwg] Stream API Feedback

Lachlan Hunt lachlan.hunt at lachy.id.au
Wed Mar 16 08:36:04 PDT 2011


On 2011-03-15 21:58, Robert O'Callahan wrote:
> Instead of creating new state signalling and control API for streams, what
> about the alternative approach of letting <video> and <audio> use sensors as
> sources, and a way to connect the output of <video> and <audio> to encoders?

I'm not entirely sure I understand your proposal, but are you suggesting 
that the input streams from the camera/microphone would first go to 
<video> and <audio> elements, allowing the existing HTMLMediaElement API 
on those elements to be used to control those streams, the output of 
which can then be encoded and recorded to a file or streamed remotely?

I'm not so sure that would be ideal.  The state machinery, assuming you 
mean networkState, readyState and their associated constants, is 
clearly designed and optimised for obtaining media over a network and 
does not map so well to obtaining streams directly from local devices.

Many other properties, such as duration and playbackRate, also have 
little meaning in the context of streaming media.  Some, like 
currentTime, have only limited applicability to streams: it can tell 
you how long the stream has been playing, but it must be effectively 
read-only, as seeking is not possible.

In fact, of all the properties on HTMLMediaElement, the only ones that 
seem to have any real use for streaming media are volume, muted, 
paused and ended.  So I'm not convinced that it's a good idea to try 
to reuse existing APIs simply for the sake of reusing them, when they 
aren't really a good fit.

> Then we'd get all the existing state machinery for free. We'd also get
> sensor input for audio processing (e.g. Mozilla or Chrome's audio APIs), and
> in-page video preview, and using <canvas> to take snapshots, and more...

We can already do in-page video preview with the existing design.

var v = document.querySelector("video");
navigator.getUserMedia("video", function(stream) {
   // Assign the camera stream directly to the video element
   v.src = stream;
});

From there, taking snapshots with canvas is also possible.  We can in 
fact already do that with what Opera had implemented for the <device> 
element.
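
For example, a minimal snapshot sketch (assuming the stream is already 
playing in the video element v from the example above):

var canvas = document.createElement("canvas");
canvas.width = v.videoWidth;
canvas.height = v.videoHeight;
canvas.getContext("2d").drawImage(v, 0, 0, canvas.width, canvas.height);
var snapshot = canvas.toDataURL("image/png");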

But that's not particularly useful for the audio element. It's rare that 
the user would want their microphone input to be echoed back to them via 
an audio element. In most cases, when a microphone stream is input into 
an audio element, the audio element itself would need to be muted to 
prevent unwanted and annoying echo or, worse, feedback loops.  Such a 
setup would only be useful if the audio data were being analysed and 
rendered in some other form, for example as an audio spectrum 
visualisation (as with Mozilla's experimental audio data API).
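
To sketch what that analysis case might look like (assuming, as above, 
that a microphone stream can be assigned to an audio element's src, and 
using Mozilla's experimental MozAudioAvailable event; drawSpectrum is a 
hypothetical visualisation function):

var a = document.querySelector("audio");
navigator.getUserMedia("audio", function(stream) {
   a.src = stream;
   a.muted = true;  // mute local playback to avoid echo and feedback
   a.addEventListener("MozAudioAvailable", function(event) {
      // event.frameBuffer holds the raw samples for this interval
      drawSpectrum(event.frameBuffer);  // hypothetical
   }, false);
});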

-- 
Lachlan Hunt - Opera Software
http://lachy.id.au/
http://www.opera.com/


