[whatwg] Stream API Feedback

Olli Pettay Olli.Pettay at helsinki.fi
Wed Mar 16 11:29:37 PDT 2011


On 03/16/2011 05:36 PM, Lachlan Hunt wrote:
> On 2011-03-15 21:58, Robert O'Callahan wrote:
>> Instead of creating new state signalling and control API for streams,
>> what about the alternative approach of letting <video> and <audio>
>> use sensors as sources, and a way to connect the output of <video>
>> and <audio> to encoders?
>
> I'm not entirely sure I understand your proposal, but are you suggesting
> that the input streams from the camera/microphone would first go to
> <video> and <audio> elements, allowing the existing HTMLMediaElement API
> on those elements to be used to control those streams, the output of
> which can then be encoded and recorded to a file or streamed remotely?

I think roc did suggest that.
Perhaps navigator.getUserMedia("audio,video", success, error);
could return a URL to the device in the success callback, and that URL
could then be set to video.src.
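
Something like this sketch, purely to illustrate the shape suggested
above (a success callback receiving a device URL is hypothetical, not
a specced API):

  navigator.getUserMedia("audio,video",
    function (url) {
      // hypothetical: 'url' points at the local camera/microphone
      var video = document.querySelector("video");
      video.src = url;
      video.play();
    },
    function (error) {
      // no device available, or the user denied access
    });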


>
> I'm not so sure that would be ideal. The state machinery, assuming you
> mean the networkState, readyState and their associated constants, is
> clearly designed and optimised for obtaining media over a network and
> does not map so well to obtaining streams directly from local devices.
I'd guess reading from local devices would be very similar to reading
video data from local files (which browsers already support).
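
For comparison, playing back a local file already looks roughly like
this with the File API (modulo vendor prefixes; createObjectURL hands
the media element a URL it can load directly):

  var input = document.querySelector("input[type=file]");
  input.onchange = function () {
    var video = document.querySelector("video");
    video.src = window.URL.createObjectURL(input.files[0]);
    video.play();
  };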


>
> Many other properties, such as duration, playbackRate, etc. also do not
> have much meaning in the context of streaming media. Some, like
> currentTime, only have limited applicability to streams as it can tell
> you how long it's played for, but must be effectively readonly as seeking
> is not possible.
Well, that is the case already with streamed video.
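
A page can already detect that with the existing HTMLMediaElement API:
a live source typically exposes no seekable ranges, so something like

  if (video.seekable.length === 0) {
    // live stream: treat currentTime as read-only, hide the seek bar
  }

works today.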



> But that's not particularly useful for the audio element. It's rare that
> the user would want their microphone input to be echoed back to them via
> an audio element. In most cases, when a microphone stream is input into
> an audio element, the audio element itself would need to be muted to
> prevent unwanted and annoying echo or, worse, feedback loops. That would
> only be useful if the audio data were being analysed and output, for
> example, to an audio spectrum visualisation (like with Mozilla's
> experimental audio data API).
Audio (and video) data could be modified before it is encoded and
streamed using PeerConnection. That way one could, for example,
reduce background noise from the audio stream, or 'crop' the video
before sending it. Or if the camera doesn't support grayscale,
the web page could convert the video to grayscale itself in
order to save network bandwidth.
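
As a rough sketch of the per-frame processing only (how the processed
frames would then be handed to the encoder/PeerConnection is left open,
since that part of the API doesn't exist yet):

  var canvas = document.createElement("canvas");
  var ctx = canvas.getContext("2d");

  function grayscaleFrame(video) {
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
    ctx.drawImage(video, 0, 0);
    var frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
    var d = frame.data;
    for (var i = 0; i < d.length; i += 4) {
      // simple luma approximation of the RGB values
      var y = 0.3 * d[i] + 0.59 * d[i + 1] + 0.11 * d[i + 2];
      d[i] = d[i + 1] = d[i + 2] = y;
    }
    ctx.putImageData(frame, 0, 0);
    return canvas; // hypothetically fed to the encoder instead of the raw frame
  }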



-Olli


