[whatwg] Real-time thread support for workers

Jussi Kalliokoski jussi.kalliokoski at gmail.com
Thu Aug 9 09:46:13 PDT 2012


On Thu, Aug 9, 2012 at 6:09 PM, David Bruant <bruant.d at gmail.com> wrote:

>  On 09/08/2012 09:59, Jussi Kalliokoski wrote:
>
> Hello David,
>
> Hi Jussi,
>
> On Thu, Aug 9, 2012 at 3:54 PM, David Bruant <bruant.d at gmail.com> wrote:
>
>  * The last source is your own content competing with itself for CPU.
>>
>
> *snip*
>
>
>> One question I have is whether different parts of your own content (like
>> different workers) should declare which priority should be given, or whether
>> the application should be written in a way that is resistant to high CPU
>> stress (e.g. doing little work besides the audio work).
>>
>
> I'm sorry, not entirely sure I follow... :)
>
> No worries, it wasn't really clear, I admit :-)
> Your proposal draws an API boundary between the developer and the system,
> based on assigning a priority and letting the system judge what to do with
> it. I was suggesting that more (not all, but more) should be put on the
> developer's shoulders rather than letting the computer guess.
>

Owkay... How, exactly, do you mean?

>
>
>> Since the only relevant case for priorities is the third one, I'd like to
>> question the relevance of the use case.
>> Is implementing per-browsing-content web worker priority worth the
>> result? Will we be able to really notice an improvement in the audio
>> quality that often?
>>
>
> Yes. Especially on mobile devices it makes a world of difference when, for
> example, on a single-core phone you have an audio app in the foreground and
> a Twitter client in the background. If the Twitter client decides to update
> its content, the audio is likely to glitch, and that is probably not what
> the user wanted.
>
> We're back to the case of two pieces of content competing. An API shouldn't
> be able to influence that, for the reason cited in the previous message
> (which you said you were worried about).
> I know Firefox is currently doing work to reduce the work done by
> background tabs (for instance, short setTimeouts are clamped to 1 s when in
> the background; there is other work going on).
>

Yes, I was actually referring to a non-web-app Twitter client. Or a web-app
Twitter client in another browser/wrapper. :)


> Prioritizing between background and foreground tasks is an implementation
> issue, not an issue that should require a web content API, IMHO.
>
> Once again, the only use case being discussed here is content competing
> against itself for CPU.
>
>
>
>   Here's the discussion thread on AudioWG [1] and a good article
> exploring the subject of interaction between audio and the rest of the
> system [2].
>
> I haven't fully read the AudioWG thread (I will. Meanwhile, if the thread
> addresses my point, can you link to specific messages?),
>

Sure, I'll try to!


> but I have read the article.
> Most points either don't apply to the web or are already on the developer's
> shoulders.
> * Blocking
> => Except for a couple of pathological exceptions (alert, prompt, sync
> XHR), JavaScript has a non-blocking model
>
> * Poor worst-case complexity algorithms
> => That's almost fully on the developer's shoulders. The web platform
> implementers already try to avoid such algorithms (which I hear is a
> dilemma in text-layout algorithms)
>
> * Locking
> => The message passing model has no notion of locking.
>
> * Memory allocation
> => Mostly on the developer's shoulders.
>
> * Invisible things: garbage collection
> => GC could be "controlled" by a priority actually, but this needs to be
> discussed with the JS engine folks.
>
> * Page faults
> => You can't do anything about those on the web.
>
> One thing that isn't explicitly written is that when doing audio in C, you
> have shared memory between threads (hence locking), and my guess is that
> it's a good source of problems. You don't have shared memory in JS with
> web workers, however. Transferables are a good step forward; maybe a
> better thing to discuss would be to move further in that direction.
>
> According to this article, it seems that the web platform is well-suited
> (no lock, no blocking) for audio actually, isn't it?
>

I agree! A lot of people on the AudioWG don't, however.
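For anyone not familiar with the Transferables mentioned above, here's a
minimal sketch of what they buy you (the buffer size is illustrative;
MessageChannel's postMessage takes the same transfer-list argument as a
worker's postMessage):

```javascript
// Demonstrating a Transferable handoff: the ArrayBuffer's memory moves
// to the receiving side with no copy, and the sender's copy is detached.
const channel = new MessageChannel();

const buf = new ArrayBuffer(1024 * 1024); // say, 1 MiB of sample data
console.log("before transfer:", buf.byteLength); // 1048576

// The second argument is the transfer list: buf is moved, not cloned.
channel.port1.postMessage(buf, [buf]);

console.log("after transfer:", buf.byteLength); // 0 -- detached

channel.port1.close();
```

For large audio buffers this avoids the structured-clone copy that a plain
postMessage would do, which is exactly the direction David is suggesting.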


>   The gain for audio is so significant
>
> Did someone do research on that? Do we have benchmarks, numbers? Or is the
> "significant" hypothetical?
>

I'm not the best person to answer that question, unfortunately. This issue
filed against Android, linked on the thread, might be of interest though:
http://code.google.com/p/android/issues/detail?id=3434 . What the issue
ends up focusing on is having audio in a high-priority real-time thread.
From a quick glance, the performance numbers thrown around (are they to be
trusted?) suggest latencies of up to half a second when audio is run in a
normal-priority thread, which is quite unacceptable (recent studies suggest
the brain can perceive a visual cue and a sound as connected only when they
are at most about 80 ms apart).
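To put those numbers in perspective, here's the basic buffering arithmetic
(the sample rate and buffer sizes are my own illustrative assumptions, not
figures from the Android issue):

```javascript
// Latency contributed by audio buffering alone, ignoring hardware and
// mixer delays: frames buffered / sample rate.
const sampleRate = 44100; // Hz, a common default

const latencyMs = (frames) => (frames / sampleRate) * 1000;

console.log(latencyMs(4096).toFixed(1));  // 92.9 ms -- a 4096-frame buffer
                                          // is already past the ~80 ms mark
console.log(latencyMs(22050).toFixed(1)); // 500.0 ms -- the kind of
                                          // buffering needed to avoid
                                          // glitches at normal priority
```

In other words, a normal-priority thread that needs huge buffers to avoid
dropouts blows way past the perceptual sync threshold.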


>  that a lot of the working group seems to think it's a good idea to have
> a whole lot of (not very modular, to be honest) native DSP nodes that can
> run in a priority thread just to get the audio running in a priority
> thread, and I think priority-thread workers are a much better idea.
>
>
>> I would be more in favor of browsers sharing with content how busy the
>> CPU is (in a way or another) so that the content shuts down things itself
>> and decides programmatically what is worth running and what isn't.
>>
>
> Yes, that would be ideal. However I fear it's not good enough for audio.
>
> Purely based on the article, it seems that the web platform does a good
> job of helping developers write good real-time code (no blocking, no
> locking, no built-in poor worst-case complexity algorithms). The other
> points (memory allocation, page faults) are either on the developer's
> shoulders or at the system level, and priority would be unlikely to help
> with those (if it does, I would be interested in reading the related
> research on the topic). Priority could help with GC (not doing it under
> pressure), but at the same time GCs have been undergoing tremendous
> improvements lately (incremental GC in Chrome and now in Firefox,
> generational GC in Chrome and soon in FF), so it would also need to be
> proven that the difference is that substantial.
> Not having shared memory may be a bottleneck. Transferables help.
>
> All in all, the article you linked to makes me more confident that the web
> is close to being ready for real-time code.
> It would be nice (a requirement?) to see actual research on every
> assumption about how a web worker priority mechanism would improve audio
> quality.
>

Indeed. I'll try and dig if I can find something.

There is actually at least one alternative to the developer being able to
set the thread priority for their worker. With the current audio API
suggestions, it's pretty easy for the UA to determine whether a worker is
running audio, which means it could automatically adjust the thread
priority accordingly. How does that sound? There's one severe drawback to
that approach though: it gives no way to opt out of running the worker in a
high-priority thread. As I say on the thread, it's not always desirable for
the audio to have priority over everything else; for example, on a
single-core computer, running large-kernel convolution in a high-priority
thread can make the whole system virtually unusable. The same is always
true of the native DSP node approach, which is why I think it's a bad idea.
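Just to make the opt-out shape concrete, here's a strawman -- to be clear,
neither "priorityHint" nor any worker priority API is actually specified
anywhere, this is purely hypothetical:

```javascript
// HYPOTHETICAL sketch: the page states a priority *hint*, and the UA
// remains free to ignore or clamp it. The point is that the developer
// can opt out, which automatic audio-detection wouldn't allow.
function choosePriorityHint(hardwareConcurrency) {
  // A large-kernel convolver on a single-core machine should opt out,
  // or it can render the whole system virtually unusable.
  return hardwareConcurrency <= 1 ? "normal" : "realtime";
}

// In a browser this might then be applied as, say:
//   worker.priorityHint = choosePriorityHint(navigator.hardwareConcurrency);
console.log(choosePriorityHint(1)); // "normal"
console.log(choosePriorityHint(4)); // "realtime"
```

A hint-plus-opt-out model keeps the final scheduling decision with the UA
while still letting content express intent, unlike baking the priority into
native DSP nodes.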

Cheers,
Jussi
