[whatwg] Real-time thread support for workers
Jussi Kalliokoski
jussi.kalliokoski at gmail.com
Tue Oct 30 03:39:22 PDT 2012
On Sat, Oct 27, 2012 at 3:14 AM, Ian Hickson <ian at hixie.ch> wrote:
> On Thu, 9 Aug 2012, Jussi Kalliokoski wrote:
> >
> > On W3C AudioWG we're currently discussing the possibility of having web
> > workers that run in a priority/RT thread. This would be highly useful
> > for example to keep audio from glitching even under high CPU stress.
> >
> > Thoughts? Is there a big blocker for this that I'm not thinking about or
> > has it just not been discussed yet? (I tried to search for it, but
> > didn't find anything)
>
> I think it's impractical to give Web authors this kind of control. User
> agents should be able to increase the priority of threads, or notice a
> thread is being used for audio and start limiting its per-slice CPU but
> increasing the frequency of its slices, but that should be up to the UA,
> we can't possibly let Web authors control this, IMHO.
>
You're right, I agree. I think the user agent should stay on top of the
situation, monitoring CPU usage and adjusting thread priority accordingly.
However, I think the feasible options for getting the benefits of high
priority when needed are either a) that we treat the priority the developer
asks for as a request rather than a command, or b) that the user agent
detects the intent (in the case of audio I think that would be fairly
simple right now) and decides on a suitable priority, adjusting it if
necessary. To me, b) seems like the best approach to take, although both
approaches have the advantage that they don't guarantee anything and thus
leave the user agent room to adapt.
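
For illustration, option a) might look something like the sketch below. The
`priority` option is purely hypothetical (it is not part of any current
spec); the point is that it would be a hint the user agent is free to
ignore or downgrade, never a command:

    // Hypothetical sketch: "priority" is a hint, not a command; the UA
    // may grant, lower or revoke it based on observed behavior.
    var worker = new Worker('audio-processor.js', { priority: 'high' });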
> On Thu, 9 Aug 2012, Jussi Kalliokoski wrote:
> >
> > Yes, this is something I'm worried about as well. But prior work in
> > native applications suggests that high priority threads are hardly ever
> > abused like that.
>
> Native apps and Web apps aren't comparable. Native apps that the user has
> decided to install also don't arbitrarily reformat the user's disk or
> install key loggers, but I hope you agree that we couldn't let Web authors
> do those things.
>
> The difference between native apps and Web apps is that users implicitly
> trust native app authors, and therefore are (supposed to be) careful about
> what software they install. However, on the Web, users do not have to be
> (anywhere near as) careful, and so they follow arbitrary links. Trusted
> sites get hijacked by hostile code, users get phished to hostile sites,
> trolls point users on social networks at hostile sites. Yet, when all is
> working as intended (i.e. modulo security bugs), the user is not at risk
> of their machine being taken down.
>
> If we allow sites to use 100% CPU on a realtime thread, then this changes,
> because untrusted hostile sites actually _can_ cause harm.
>
Very true, it can indeed be used to cause harm, and we should not allow
that. I ignored this because I was thinking of attack vectors as a
bidirectional thing (someone loses, someone gains), and I couldn't think of
a way an attacker would benefit from freezing random users' computing
devices. But that line of reasoning admittedly doesn't hold up well on the
web.
> The way the Web platform normally gets around this is by having the Web
> author describe to the UA what the author wants, declaratively, and then
> having the UA take care of it without running author code. This allows the
> UA to make sure it can't be abused, while still having good performance or
> security or whatnot. In the case of Web audio, the way to get sub-80ms
> latency would be to say "when this happens (a click, a collision), do this
> (a change in the music, a sound effect)". This is non-trivial to specify,
> but wouldn't run the risk of hostile sites harming the user.
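The kind of declarative trigger you describe is already expressible to some
degree with the current Web Audio API; for example, a click-triggered sound
effect involves no author code in the rendering path. A minimal sketch,
assuming `context` is an AudioContext and `effectBuffer` a sound effect
already decoded into an AudioBuffer:

    // All author code runs on the main thread; the UA renders the
    // audio graph itself and can schedule it at whatever priority
    // it sees fit.
    document.addEventListener('click', function () {
      var source = context.createBufferSource();
      source.buffer = effectBuffer;        // pre-decoded sound effect
      source.connect(context.destination);
      source.start(0);                     // play as soon as possible
    });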
Still, it is indeed non-trivial to specify, and while the Web Audio API
attempts to do this, it can't possibly cover all use cases without custom
processing in place; the spec is already huge and only addresses a limited
set of use cases efficiently. When a use case requires the developer to do
custom processing, that shouldn't cost the developer all the advantage of
having the rest of the audio graph in a real-time thread. Currently it
does: the JS processing runs in a non-RT thread, and the results would be
too unpredictable if the RT audio thread waited for the non-RT JS thread,
so the current approach is instead to buffer up the input for JS, send it
in for processing and apply the result in the next round. This means that
adding a custom processing node to the graph introduces latency of at least
the buffer size of that custom processing node.
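
To put numbers on that buffer-size floor: with a ScriptProcessorNode (as
currently drafted; earlier implementations call it createJavaScriptNode) at
a typical 44100 Hz sample rate, even small buffers add audible delay:

    // Minimum latency added by a custom JS processing node at 44.1 kHz:
    //    256 samples ->  256 / 44100 ~  5.8 ms
    //   1024 samples -> 1024 / 44100 ~ 23.2 ms
    //   4096 samples -> 4096 / 44100 ~ 92.9 ms
    var processor = context.createScriptProcessor(1024, 1, 1);
    processor.onaudioprocess = function (e) {
      // Runs on the main (non-RT) thread; the output it produces is
      // heard at least one full buffer later than the rest of the graph.
      var input = e.inputBuffer.getChannelData(0);
      var output = e.outputBuffer.getChannelData(0);
      for (var i = 0; i < input.length; i++) {
        output[i] = input[i]; // pass-through, for illustration
      }
    };
    // (Connect a source node to `processor` to feed it input.)
    processor.connect(context.destination);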
Cheers,
Jussi