[whatwg] Some questions and ideas about the "Speech for HTML Input Elements" proposal.
James Su
suzhe at google.com
Fri Jun 18 17:36:19 PDT 2010
Hi all,
I just read through the proposal briefly. I like the proposal very much, but I
have some questions and ideas about it.
1. I'm thinking about the possibility of a UA offloading the speech
recognition task to an external local service or application, such as an
input method, a text service, or even a browser extension. I'm wondering
whether the user interaction flow would still be the same when the
recognition happens outside the UA. For example, if an input method supports
voice input, it may generate voice input results as well as other events
(e.g. fake keyboard or mouse events) during a speech session.
2. Besides plain input, is it possible to perform other actions via speech?
For example, to activate a button or to clear the content of the input
element? It would be cool if rules could be defined to trigger different
actions, or even JavaScript callbacks, by speech.
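Something along these lines is what I have in mind (the "speechcommand" event
and its "command" property are purely hypothetical and not part of the
proposal; only the speech attribute comes from it):

  <input type="text" speech id="query">
  <button id="search">Search</button>
  <script>
    // Hypothetical: a UA-fired event carrying a recognized command phrase,
    // letting the page map spoken commands to actions or callbacks.
    document.getElementById("query").addEventListener("speechcommand",
      function (e) {
        if (e.command === "clear") {
          this.value = "";                              // clear the field by voice
        } else if (e.command === "search") {
          document.getElementById("search").click();    // activate the button by voice
        }
      });
  </script>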
3. How should the speech input focus be managed? What will happen if
multiple elements on a page accept speech? Is it possible to traverse among
them by speech alone?
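For example, with two speech-enabled fields on the same page (using the
proposal's speech attribute), which one receives the recognition result when
the user starts speaking, and can the user move between them without touching
the keyboard or mouse?

  <input type="text" speech name="origin"      placeholder="From">
  <input type="text" speech name="destination" placeholder="To">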
4. Is it possible to extend this proposal to other input mechanisms, such as
handwriting or even visual (gesture) recognition input? Even if that isn't
necessary for now, we may need to consider the potential impact if we want
to add this kind of thing in the future.
5. I'm wondering whether it would be better to treat the speech-related
properties as hints to the UA rather than requirements. It should be OK for
one UA to provide the speech input feature for input elements without the
speech property, and for another UA to simply ignore those properties.
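In other words, under the "hints" interpretation both of the fields below
could legitimately get speech input in one UA, while another UA could ignore
the attribute entirely:

  <input type="text" speech name="a">  <!-- hint: speech input is desirable here -->
  <input type="text"        name="b">  <!-- a UA could still choose to offer speech input here -->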
Regards
James Su