[whatwg] Limit on number of parallel Workers.
michaeln at google.com
Wed Jun 10 14:11:20 PDT 2009
On Wed, Jun 10, 2009 at 1:46 PM, Jonas Sicking <jonas at sicking.cc> wrote:
> On Tue, Jun 9, 2009 at 7:07 PM, Michael Nordman <michaeln at google.com> wrote:
> >> This is the solution that Firefox 3.5 uses. We use a pool of
> >> relatively few OS threads (5 or so iirc). This pool is then scheduled
> >> to run worker tasks as they are scheduled. So for example if you
> >> create 1000 worker objects, those 5 threads will take turns to execute
> >> the initial scripts one at a time. If you then send a message using
> >> postMessage to 500 of those workers, and the other 500 calls
> >> setTimeout in their initial script, the same threads will take turns
> >> to run those 1000 tasks (500 message events, and 500 timer callbacks).
> >> This is somewhat simplified, and things are a little more complicated
> >> due to how we handle synchronous network loads (during which we freeze
> >> an OS thread and remove it from the pool), but the above is the basic
> >> idea.
> >> / Jonas
> > That's a really good model. Scalable and degrades nicely. The only problem
> > is with very long-running operations, where a worker script doesn't return
> > in a timely fashion. If enough of them do that, all the others starve. What
> > does Firefox do about that, or in practice do you anticipate that not being
> > an issue?
> > WebKit dedicates an OS thread per worker. Chrome goes even further (for
> > now at least) with a process per worker. The 1:1 mapping is probably
> > overkill; most workers will probably spend most of their life asleep just
> > waiting for a message.
> We do see it as a problem, but not big enough of a problem that we
> needed to solve it in the initial version.
> It's not really a problem for most types of calculations; as long as
> the number of threads is larger than the number of cores, we'll still
> finish all tasks as quickly as the CPU is able to. Even for
> long-running operations, if it's work that the user wants anyway, it
> doesn't really matter whether the jobs run all in parallel or
> staggered after each other, as long as you're keeping all CPU cores
> busy.
> There are some scenarios which it doesn't work so well for. For
> example a worker that works more or less infinitely and produces more
> and more accurate results the longer it runs. Or something like a
> folding at home website which performs calculations as long as the user
> is on a website and submits them to the server.
> If enough of those workers are scheduled it will block everything else.
> This is all solvable of course; there's a lot of tweaking we can do.
> But we figured we wanted to get some data on how people use workers
> before spending too much time developing a perfect scheduling solution.
I never did like the Gears model (1:1 mapping with a thread). We were stuck
with a strong thread affinity due to other constraints (script engines, etc.),
but we could have allowed multiple workers to reside in a single thread:
a thread pool (perhaps per origin) sort of arrangement, where once a worker
was put on a particular thread it stayed there until end-of-life.
Your FF model has more flexibility: give a worker a slice
(well, where slice == run-to-completion) on any thread in the
pool, no thread affinity whatsoever (if I understand correctly).
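To make the no-affinity pool model concrete, here's a rough Python sketch. It's purely illustrative: the `WorkerPool` class and `post` method are invented names, not any browser's actual code. The point is that a fixed handful of OS threads service run-to-completion tasks (message events, timer callbacks) for an arbitrary number of logical workers, and any idle thread picks up the next pending task regardless of which worker it belongs to.

```python
import queue
import threading

class WorkerPool:
    """Toy model of the shared-pool scheduler: a few OS threads run
    tasks for many logical workers, with no thread affinity."""

    def __init__(self, num_threads=5):
        self.tasks = queue.Queue()
        for _ in range(num_threads):
            threading.Thread(target=self._run, daemon=True).start()

    def post(self, worker_id, callback):
        # A task is bound to a logical worker, not to any OS thread.
        self.tasks.put((worker_id, callback))

    def _run(self):
        while True:
            worker_id, callback = self.tasks.get()
            callback(worker_id)  # runs to completion; no preemption
            self.tasks.task_done()

pool = WorkerPool(num_threads=5)
results = []
lock = threading.Lock()

def record(worker_id):
    with lock:
        results.append(worker_id)

# 1000 logical workers, each posting one task, serviced by just 5 threads.
for wid in range(1000):
    pool.post(wid, record)
pool.tasks.join()
print(len(results))  # 1000
```

It also makes the starvation hazard above easy to see: if 5 of those callbacks loop forever, every other worker's task sits in the queue indefinitely.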
> / Jonas