[whatwg] Proposal for HTML5: Motion sensing input device (Kinect, SoftKinetic, Asus Xtion)

Jesús Ruiz García jesusruiz2007 at gmail.com
Fri Jun 29 13:35:54 PDT 2012


One last question, if not too much trouble.

Since my proposal has not been rejected outright, could I add it to
the Proposals category on the wiki?
http://wiki.whatwg.org/wiki/Category:Proposals

What do you think?

Regards.

2012/6/28 Jesús Ruiz García <jesusruiz2007 at gmail.com>

> One problem I foresee is that there are no official drivers for Linux
> and Mac; Microsoft should provide a solution for this. That said, I
> found a project called OpenKinect whose work seems fairly advanced:
> http://openkinect.org/wiki/Main_Page
>
> However, as mentioned, supporting the Kinect and similar devices
> should not really be a priority right now.
>
> Regards ;)
>
> 2012/6/27 Silvia Pfeiffer <silviapfeiffer1 at gmail.com>
>
>> On Wed, Jun 27, 2012 at 1:56 PM, Robert O'Callahan <robert at ocallahan.org>
>> wrote:
>> > On Tue, Jun 26, 2012 at 8:22 AM, Tab Atkins Jr.
>> > <jackalmage at gmail.com> wrote:
>> >
>> >> The ability to capture sound and video from the user's devices and
>> >> manipulate it in the page is already being exposed by the getUserMedia
>> >> function.  Theoretically, a Kinect can provide this information.
>> >>
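>> >> For illustration, a minimal sketch of that existing path (using
>> >> the promise-based form of the API; implementations may still
>> >> require prefixed variants):
>> >>
>> >>   // Grab the user's camera and microphone and play the live
>> >>   // stream in a <video> element. A Kinect that exposes itself
>> >>   // as an ordinary camera would be captured the same way.
>> >>   navigator.mediaDevices
>> >>     .getUserMedia({ video: true, audio: true })
>> >>     .then((stream) => {
>> >>       const video = document.querySelector('video');
>> >>       if (video) {
>> >>         video.srcObject = stream;
>> >>         video.play();
>> >>       }
>> >>     })
>> >>     .catch((err) => console.error('getUserMedia failed:', err));
>> >>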
>> >> More advanced functionality like Kinect's depth information probably
>> >> needs more study and experience before we start thinking about adding
>> >> it to the language itself.
>> >>
>> >
>> > If we were going to support anything like this, I think the best
>> > approach would be to have a new track type that getUserMedia can
>> > return in a MediaStream, containing depth buffer data.
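>> >
>> > For illustration, a purely hypothetical sketch of that approach
>> > (the { depth: true } constraint and the "depth" track kind are
>> > made up; nothing like them is specified today):
>> >
>> >   // Hypothetical only: ask for a depth track alongside video
>> >   // and pick it out of the returned MediaStream.
>> >   navigator.mediaDevices
>> >     .getUserMedia({ video: true, depth: true } as any)
>> >     .then((stream) => {
>> >       const depthTracks = stream
>> >         .getTracks()
>> >         .filter((t) => t.kind === 'depth');  // made-up kind
>> >       console.log('got', depthTracks.length, 'depth track(s)');
>> >     });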
>>
>> I agree.
>>
>> Experimentation with this in a non-live manner is already possible by
>> using a @kind="metadata" track and putting the Kinect's depth
>> information into a WebVTT file to use in parallel with the video.
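>>
>> For example, a rough sketch (the per-cue JSON payload and the file
>> name depth.vtt are assumptions; no format for depth data in WebVTT
>> is defined anywhere):
>>
>>   // Markup: <video src="capture.webm">
>>   //           <track kind="metadata" src="depth.vtt" default>
>>   //         </video>
>>   //
>>   // depth.vtt (assumed payload, one JSON object per cue):
>>   //   WEBVTT
>>   //
>>   //   00:00.000 --> 00:00.033
>>   //   {"width":320,"height":240,"data":"..."}
>>   const video = document.querySelector('video')!;
>>   const track = video.textTracks[0];   // the kind="metadata" track
>>   track.mode = 'hidden';               // fire cue events, render nothing
>>   track.addEventListener('cuechange', () => {
>>     const cue = track.activeCues && (track.activeCues[0] as VTTCue);
>>     if (cue) {
>>       const depthFrame = JSON.parse(cue.text);  // one depth frame
>>       console.log('depth at', video.currentTime, depthFrame);
>>     }
>>   });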
>>
>> WebM has further defined how to encapsulate WebVTT into a WebM text
>> track [1], so you could even put this information into a video file.
>> I believe the same is possible with MPEG [2].
>>
>> The exact format in which the Kinect's depth information is delivered
>> as a timed metadata track would need to be specified before it could
>> become its own @kind track type and be delivered live.
>>
>>
>> Cheers,
>> Silvia.
>> [1]
>> http://wiki.webmproject.org/webm-metadata/temporal-metadata/webvtt-in-webm
>> [2] http://html5.cablelabs.com/tracks/media-container-mapping.html
>>


