[whatwg] Proposal for HTML5: Motion sensing input device (Kinect, SoftKinetic, Asus Xtion)

Silvia Pfeiffer silviapfeiffer1 at gmail.com
Wed Jun 27 10:22:55 PDT 2012


On Wed, Jun 27, 2012 at 1:56 PM, Robert O'Callahan <robert at ocallahan.org> wrote:
> On Tue, Jun 26, 2012 at 8:22 AM, Tab Atkins Jr. <jackalmage at gmail.com> wrote:
>
>> The ability to capture sound and video from the user's devices and
>> manipulate it in the page is already being exposed by the getUserMedia
>> function.  Theoretically, a Kinect can provide this information.
>>
>> More advanced functionality like Kinect's depth information probably
>> needs more study and experience before we start thinking about adding
>> it to the language itself.
>>
>
> If we were going to support anything like this, I think the best approach
> would be to have a new track type that getUserMedia can return in a
> MediaStream, containing depth buffer data.

I agree.
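
To make that concrete, here is a purely hypothetical sketch (the
{depth: true} constraint and the extra depth track are invented for
illustration; nothing like them is specified anywhere yet):

  // Hypothetical only: no spec or browser has a depth constraint
  // or a depth MediaStreamTrack today.
  // (getUserMedia is still vendor-prefixed in current browsers.)
  navigator.getUserMedia({ video: true, depth: true },
    function (stream) {
      var video = document.querySelector('video');
      video.src = URL.createObjectURL(stream); // colour video plays
      // The depth buffers would travel as an additional track in
      // the same MediaStream, synchronised with the video frames.
    },
    function (error) {
      console.log('getUserMedia failed');
    });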

Experimentation with this in a non-live manner is already possible
by using a @kind="metadata" track and putting the Kinect's depth
information into a WebVTT file that plays in parallel with the video.
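
For example, with JSON cue payloads (this payload layout is just one
possibility I am making up for illustration, not an agreed format):

  WEBVTT

  00:00:00.000 --> 00:00:00.040
  {"frame": 1, "depthMap": "<base64-encoded depth buffer>"}

  00:00:00.040 --> 00:00:00.080
  {"frame": 2, "depthMap": "..."}

The page then loads it next to the video and reads the cues from
script:

  <video src="kinect-capture.webm" controls>
    <track kind="metadata" src="depth.vtt" default>
  </video>

  <script>
    var track = document.querySelector('track').track;
    track.mode = 'hidden'; // load the cues without rendering them
    track.addEventListener('cuechange', function () {
      if (this.activeCues.length > 0) {
        var data = JSON.parse(this.activeCues[0].text);
        // hand data.depthMap to a <canvas>/WebGL visualisation
      }
    });
  </script>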

The WebM project has further defined how to encapsulate WebVTT in a
WebM text track [1], so you could even put this information into a
video file. I believe the same is possible with MPEG [2].
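
Once a browser exposes such an in-band track, script would find it
through the same interface as an out-of-band <track>, along these
lines:

  var video = document.querySelector('video');
  video.addEventListener('loadedmetadata', function () {
    // In-band text tracks appear in video.textTracks alongside
    // any <track> elements.
    for (var i = 0; i < video.textTracks.length; i++) {
      var t = video.textTracks[i];
      if (t.kind === 'metadata') {
        t.mode = 'hidden';
        t.addEventListener('cuechange', function () {
          // same cue handling as in the snippet above
        });
      }
    }
  });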

The exact format in which the Kinect's depth information is
delivered as a timed metadata track would need to be specified
before it could become a @kind track type of its own and be
delivered live.


Cheers,
Silvia.
[1] http://wiki.webmproject.org/webm-metadata/temporal-metadata/webvtt-in-webm
[2] http://html5.cablelabs.com/tracks/media-container-mapping.html


