[whatwg] Proposal for HTML5: Motion sensing input device (Kinect, SoftKinetic, Asus Xtion)
Jesús Ruiz García
jesusruiz2007 at gmail.com
Thu Jun 28 06:42:27 PDT 2012
One problem I can see is that there are no official
drivers for Linux and Mac.
Microsoft should provide a solution for this. That said, I found a
project called OpenKinect that seems to have made good progress.
However, as mentioned, supporting Kinect and similar devices should not be a problem.
Regards ;)
2012/6/27 Silvia Pfeiffer <silviapfeiffer1 at gmail.com>
> On Wed, Jun 27, 2012 at 1:56 PM, Robert O'Callahan <robert at ocallahan.org>
> > On Tue, Jun 26, 2012 at 8:22 AM, Tab Atkins Jr. <jackalmage at gmail.com>
> >> The ability to capture sound and video from the user's devices and
> >> manipulate it in the page is already being exposed by the getUserMedia
> >> function. Theoretically, a Kinect can provide this information.
> >> More advanced functionality like Kinect's depth information probably
> >> needs more study and experience before we start thinking about adding
> >> it to the language itself.
> > If we were going to support anything like this, I think the best approach
> > would be to have a new track type that getUserMedia can return in a
> > MediaStream, containing depth buffer data.
> I agree.
> Experimentation with this in a non-live manner is already possible by
> using a @kind="metadata" track and putting the Kinect's depth
> information into a WebVTT file to use in parallel with the video.
> WebM has further defined how to encapsulate WebVTT into a WebM text
> track, so you could even put this information into a video file.
> I believe the same is possible with MPEG.
> The exact format for how the Kinect's depth information is delivered
> as a timed metadata track would need to be specified before it could
> turn into its own @kind track type and deliver it live.
>  http://html5.cablelabs.com/tracks/media-container-mapping.html
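As a sketch of that experiment: each depth frame could be serialized into the
payload of a metadata cue and decoded again when the cue fires. The payload
format below (JSON carrying width, height, and base64-encoded 16-bit depth
values in millimetres) is purely an assumption for illustration; no such
format has been specified anywhere.

```javascript
// Hypothetical payload format for a @kind="metadata" depth track.
// Nothing here is standardized; it only shows that the round trip is
// straightforward with existing primitives.

// Pack one depth frame (a Uint16Array of millimetre values) into a cue
// payload string.
function encodeDepthCue(width, height, depth) {
  const bytes = new Uint8Array(depth.buffer, depth.byteOffset, depth.byteLength);
  let bin = '';
  for (const b of bytes) bin += String.fromCharCode(b);
  return JSON.stringify({ width, height, data: btoa(bin) });
}

// Decode a cue payload back into its dimensions and a Uint16Array.
function decodeDepthCue(payload) {
  const { width, height, data } = JSON.parse(payload);
  const bin = atob(data);
  const bytes = new Uint8Array(bin.length);
  for (let i = 0; i < bin.length; i++) bytes[i] = bin.charCodeAt(i);
  return { width, height, depth: new Uint16Array(bytes.buffer) };
}

// In a page, such cues would arrive through a <track kind="metadata">
// element's TextTrack, e.g.:
//   track.addEventListener('cuechange', () => {
//     for (const cue of track.activeCues) {
//       const frame = decodeDepthCue(cue.text);
//       // ...render frame.depth against the video frame...
//     }
//   });
```

This stays within what a metadata text track already allows today; a real
live depth track would still need the payload format pinned down, as Silvia
says.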