[whatwg] Proposal for HTML5: Motion sensing input device (Kinect, SoftKinetic, Asus Xtion)
Jesús Ruiz García
jesusruiz2007 at gmail.com
Mon Jun 25 15:24:37 PDT 2012
Thank you for your answer Tab Atkins.
Indeed, as you say, I see this as a proposal for the future. getUserMedia is
perfect for these features, as you indicate.
I will stay alert for any news on this subject, in case I can think of
other proposals that may help improve websites.
Best regards, and thanks.
2012/6/25 Tab Atkins Jr. <jackalmage at gmail.com>
> On Mon, Jun 25, 2012 at 9:10 AM, Jesús Ruiz García
> <jesusruiz2007 at gmail.com> wrote:
> > I start by noting that this message may be considered useless; I
> > apologize for that.
> > A few weeks ago I was in the #whatwg chat and asked how to send an email
> > to this list. I took a few days before sending this email because I was
> > investigating whether a similar project already existed, and I found
> > one.
> > My proposal for HTML5 is to make it work with Kinect, SoftKinetic,
> > Asus Xtion, and similar motion-sensing devices, so they can be used to
> > interact with the web.
> > Kinect, being the most commonly used device of this kind, would be ideal
> > for this proposal.
> > The Kinect patents are presumably owned by Microsoft. I understand that
> > patent issues have been discussed in HTML5 before, so this aspect could
> > possibly be some kind of problem.
> > From my point of view, this would help sell more devices of this type.
> > Surely, in the future, webcams will be replaced by these more powerful
> > devices, included as standard on all computers.
> > Also, users would gain an advance on the web, because I do not mean just
> > adding support for browsing the web with gestures, but support for many
> > more things.
> > I have thought of some functions for the web that are not being developed
> > yet:
> > *- Online shopping/online retail:* You want to buy clothes, but do not
> > know what size you actually wear. Online stores could offer an option to
> > run the Kinect and scan your body to tell you the correct size for that
> > article. We could even see whether that shirt looks good on you or not.
> > *- Makeup/hair-salon sites:* With face recognition, users could try
> > different makeup products on the market. Obviously these products would
> > be tested virtually, and could then be purchased.
> > *- Fitness/rehabilitation sites:* While this could be considered a video
> > game, I see it more as an application. It would check whether the person
> > is performing an exercise correctly, making sure they do not injure
> > themselves, and track their exercise rhythm and progress in mobility.
> > *- Possible support for Canvas:* Interact with Canvas via Kinect,
> > although this can also be done with multitouch technology.
> > There are many ideas, but these are four simple possibilities that
> > occurred to me while I was writing this text.
> > DepthJS (https://github.com/doug/depthjs) allows any web page to
> > interact with the Microsoft Kinect.
> > It has not been updated for a few months, and so far it only allows web
> > browsing via gestures. I suppose that, at the moment, it cannot perform
> > a body scan and display it on screen in the browser.
> > According to some reports, Microsoft is also developing a version of
> > Internet Explorer for Xbox 360 with Kinect support.
> > Well, with this information, you can get a picture of the situation
> > around my proposal.
> > I apologize, as I said at the beginning of this text, if this proposal
> > is absurd or does not fit the HTML5 philosophy, and it is better to use
> > a separate library such as *DepthJS* for this.
> > I look forward to your opinions and comments.
> The ability to capture sound and video from the user's devices and
> manipulate it in the page is already being exposed by the getUserMedia
> function. Theoretically, a Kinect can provide this information.
> More advanced functionality like Kinect's depth information probably
> needs more study and experience before we start thinking about adding
> it to the language itself.