[whatwg] Limit on number of parallel Workers.

Jonas Sicking jonas at sicking.cc
Wed Jun 10 13:46:02 PDT 2009


On Tue, Jun 9, 2009 at 7:07 PM, Michael Nordman <michaeln at google.com> wrote:
>>
>> This is the solution that Firefox 3.5 uses. We use a pool of
>> relatively few OS threads (5 or so iirc). This pool is then scheduled
>> to run worker tasks as they are scheduled. So for example if you
>> create 1000 worker objects, those 5 threads will take turns to execute
>> the initial scripts one at a time. If you then send a message using
>> postMessage to 500 of those workers, and the other 500 call
>> setTimeout in their initial script, the same threads will take turns
>> to run those 1000 tasks (500 message events, and 500 timer callbacks).
>>
>> This is somewhat simplified, and things are a little more complicated
>> due to how we handle synchronous network loads (during which we freeze
>> an OS thread and remove it from the pool), but the above is the basic
>> idea.
>>
>> / Jonas
>
> That's a really good model. Scalable and degrades nicely. The only problem is
> with very long running operations where a worker script doesn't return in a
> timely fashion. If enough of them do that, all others starve. What does FF
> do about that, or in practice do you anticipate that not being an issue?
> WebKit dedicates an OS thread per worker. Chrome goes even further (for now
> at least) with a process per worker. The 1:1 mapping is probably overkill as
> most workers will probably spend most of their life asleep just waiting for
> a message.

We do see it as a problem, but not a big enough problem that we needed
to solve it in the initial version.

It's not really a problem for most types of calculations: as long as
the number of threads is larger than the number of cores, we'll still
finish all the tasks as quickly as the CPU is able to. Even for
long-running operations, if they're operations the user wants anyway,
it doesn't really matter whether the jobs all run in parallel or
staggered after each other, as long as all CPU cores are kept busy.
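
To make the scheduling model concrete, here's a rough TypeScript
sketch of the kind of pool I mean. This is not our actual
implementation (Gecko's pool is native code and the names and pool
size here are invented); it just shows many worker tasks sharing a
small, fixed number of execution slots:

  // Rough sketch only -- not the actual Gecko code. A fixed-size pool
  // drains one shared queue of worker tasks (initial scripts, message
  // events, timer callbacks), so 1000 workers share 5 slots.
  type Task = () => void;

  class WorkerTaskPool {
    private queue: Task[] = [];
    private active = 0;

    constructor(private readonly poolSize: number) {}

    schedule(task: Task): void {
      this.queue.push(task);
      this.pump();
    }

    private pump(): void {
      // At most poolSize tasks run at once; the rest wait their turn.
      while (this.active < this.poolSize && this.queue.length > 0) {
        const task = this.queue.shift()!;
        this.active++;
        Promise.resolve()
          .then(task)
          .finally(() => {
            this.active--;
            this.pump(); // hand the freed slot to the next queued task
          });
      }
    }
  }

  // 1000 workers' initial scripts become 1000 queued tasks on 5 slots.
  const pool = new WorkerTaskPool(5);
  for (let i = 0; i < 1000; i++) {
    pool.schedule(() => { /* run worker i's initial script */ });
  }

The point is that creating more workers only makes the queue longer;
it doesn't create more concurrent execution than the pool allows, and
total throughput stays the same as long as the cores are saturated.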

There are some scenarios it doesn't work so well for. For example, a
worker that runs more or less indefinitely and produces more and more
accurate results the longer it runs, or something like a Folding at home
style site that performs calculations for as long as the user is on the
page and submits the results to the server.

If enough of those workers are scheduled, they will block everything else.
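
Purely as an illustration (this worker is hypothetical, not taken from
any real site), a script like the following never returns control to
the pool, so a handful of them can occupy every slot indefinitely:

  // Illustrative worker script only: refines a Monte Carlo estimate
  // of pi forever. The task never completes, so in a pooled
  // implementation the OS thread running it is never handed back.
  let inside = 0;
  let total = 0;
  while (true) {
    for (let i = 0; i < 1000000; i++) {
      const x = Math.random();
      const y = Math.random();
      if (x * x + y * y <= 1) inside++;
      total++;
    }
    // The estimate keeps getting more accurate the longer it runs.
    postMessage({ piEstimate: (4 * inside) / total, samples: total });
  }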

This is all solvable, of course; there's a lot of tweaking we can do.
But we figured we wanted to get some data on how people use workers
before spending too much time developing a perfect scheduling
solution.

/ Jonas


