[whatwg] Limit on number of parallel Workers.

Jonas Sicking jonas at sicking.cc
Tue Jun 9 18:29:12 PDT 2009


On Tue, Jun 9, 2009 at 6:13 PM, Dmitry Titov <dimich at chromium.org> wrote:
> Hi WHATWG!
> In Chromium, workers are going to have their own separate processes, at
> least for now. So we quickly found that "while(true) foo = new Worker(...)"
> consumes OS resources very fast :-) In fact, this will kill other browsers
> too, and on some systems the unbounded number of threads will effectively
> "freeze" the system beyond the browser.
> We are thinking about how to reasonably limit the resources consumed by a
> 'sea of workers'. Obviously, one could just limit the maximum number of
> parallel workers available to a page, a domain, or the browser. But what
> do you do when the limit is reached? The Worker() constructor could return
> null or throw an exception. However, that seems to go against the spirit
> of the spec, since it usually does not deal with resource constraints. So
> it makes sense to look for the implementation that behaves most sensibly.
> The current idea is to let pages create as many Worker objects as
> requested, but not necessarily start them right away, so that no resources
> are allocated beyond the thin JS wrapper. As workers terminate and their
> number drops below the limit, more workers from the "ready queue" can be
> started. This supports implementation limits without exposing them.
> This is similar to how a 'sea of XHRs' would behave. The test page
> here creates 10,000 async XHR requests to distinct URLs and then waits for
> all of them to complete. While it's obviously impossible to have 10K HTTP
> connections in parallel, all XHRs will complete, given time.
> Does this sound like a good way to avoid a resource crunch caused by a
> large number of workers?
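
For illustration only, a rough page-level sketch of that queue-then-start
idea might look like the following. The real limit would live inside the
browser's Worker() implementation; MAX_PARALLEL, queueWorker and the 'done'
message protocol here are assumptions, not anything from the proposal or
the spec:

    const MAX_PARALLEL = 16;   // assumed limit; real limits are implementation-defined
    const pending = [];        // worker scripts waiting for a free slot
    let running = 0;

    function queueWorker(scriptUrl, onMessage) {
      pending.push({ scriptUrl, onMessage });
      maybeStartNext();
    }

    function maybeStartNext() {
      if (running >= MAX_PARALLEL || pending.length === 0) return;
      const { scriptUrl, onMessage } = pending.shift();
      running++;
      const worker = new Worker(scriptUrl);
      worker.onmessage = (event) => {
        onMessage(event);
        // Treat a 'done' message as the worker finishing its job.
        if (event.data === 'done') {
          worker.terminate();
          running--;
          maybeStartNext();   // a slot freed up; start the next queued worker
        }
      };
    }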

This queuing approach is essentially the solution that Firefox 3.5 uses. We
use a pool of relatively few OS threads (5 or so, iirc), and that pool is
scheduled to run worker tasks as they come in. So, for example, if you
create 1000 worker objects, those 5 threads will take turns executing the
initial scripts, one at a time. If you then send a message using
postMessage to 500 of those workers, and the other 500 call setTimeout in
their initial scripts, the same threads will take turns running those 1000
tasks (500 message events and 500 timer callbacks).

This is somewhat simplified, and things are a little more complicated due
to how we handle synchronous network loads (during which we freeze an OS
thread and remove it from the pool), but the above is the basic idea.
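
In rough JS pseudocode, the scheduling idea looks something like the sketch
below. This is not the actual Gecko code, which is considerably more
involved; POOL_SIZE, scheduleTask and runOnPoolThread are just illustrative
names, and the "thread" hand-off is only simulated with a deferred callback:

    const POOL_SIZE = 5;    // a handful of OS threads, shared by all workers
    const taskQueue = [];   // pending tasks: initial scripts, message events, timer callbacks
    let busySlots = 0;

    function scheduleTask(task) {
      taskQueue.push(task);
      dispatch();
    }

    function dispatch() {
      // Hand out tasks while a pool slot is free.
      while (busySlots < POOL_SIZE && taskQueue.length > 0) {
        const task = taskQueue.shift();
        busySlots++;
        runOnPoolThread(task, () => {
          busySlots--;
          dispatch();       // a slot freed up; run the next queued task
        });
      }
    }

    // Stand-in for handing the task to one of the pooled OS threads;
    // here it just defers the task and reports back when it is done.
    function runOnPoolThread(task, done) {
      setTimeout(() => { task(); done(); }, 0);
    }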

/ Jonas

