[whatwg] Speech input element
singer at apple.com
Wed May 19 14:38:38 PDT 2010
I am a little concerned that we are increasingly breaking down a metaphor, the 'virtual interface', without realizing what that abstraction buys us. At the moment, we have the concept of a hypothetical pointer and a hypothetical keyboard (with some abstract states, such as focus) that can be driven using a whole range of physical modalities. If we develop UIs that are specific to people actually speaking, we have 'torn the veil' of that abstract interface. What happens to people who cannot speak, for example? Or who cannot speak the required language well enough to be recognized?
Multimedia and Software Standards, Apple Inc.