[whatwg] Limit on number of parallel Workers.

Jeremy Orlow jorlow at chromium.org
Tue Jun 9 18:36:32 PDT 2009


On Tue, Jun 9, 2009 at 6:28 PM, Oliver Hunt <oliver at apple.com> wrote:

> I believe it will be difficult to have such a limit, as sites may rely
> on GC to collect Workers that are no longer running (so the number of
> running threads is non-deterministic), and in the context of mixed-source
> content ("mash-ups") it will be difficult for any content source to be
> sure it isn't contributing to that limit.  Obviously a UA shouldn't
> crash, but I believe it is up to the UA to determine how to achieve
> this -- e.g. an implementation with a 1:1 relationship between workers
> and processes will have a much lower limit than one with a
> worker-per-thread model or an m:n relationship between workers and
> threads/processes.
>  Having the specification limited simply because one implementation
> mechanism has certain limits, when there are many alternative
> implementation models, seems like a bad idea.
>

Where in his email does Dmitry advocate upper limits?


> I believe that if there are going to be any worker-related limits, they
> should realistically be a lower limit on the number of workers rather
> than an upper one.
>

Perhaps lower limits on how many workers are 'guaranteed' to be available
would be good, but that's fairly orthogonal to the original email.  What
he's proposing is a way to gracefully rate-limit the number of workers,
rather than letting the OS rate-limit them by running out of resources.

I for one like the proposal and the analogy to what happens when you issue
10,000 XHRs at once.
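
To make the analogy concrete, here's a rough sketch of what a page doing
this would look like under the proposal -- every Worker is created
successfully and eventually runs, even if the UA quietly queues most of
them (worker.js is a hypothetical script; none of this is any browser's
actual behavior):

    const results = [];
    for (let i = 0; i < 10000; i++) {
      const w = new Worker('worker.js'); // always succeeds; may be queued
      w.onmessage = (e) => {
        results.push(e.data);
        w.terminate();                   // frees a slot for a queued worker
      };
      w.postMessage(i);                  // buffered until the worker starts
    }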

J


On Jun 9, 2009, at 6:13 PM, Dmitry Titov wrote:
>
> Hi WHATWG!
>
> In Chromium, workers are going to have their own separate processes, at
> least for now. So we quickly found that "while(true) foo = new Worker(...)"
> exhausts OS resources :-) In fact, this will kill other browsers too, and
> on some systems the unbounded number of threads will effectively "freeze"
> the system beyond the browser.
>
> We are thinking about how to reasonably place limits on the resources
> consumed by a 'sea of workers'. Obviously, one could just limit the
> maximum number of parallel workers available to a page, a domain, or the
> browser. But what do you do when the limit is reached? The Worker()
> constructor could return null or throw an exception. However, that seems
> to go against the spirit of the spec, since it usually does not deal with
> resource constraints. So it makes sense to look for an implementation
> that behaves as gracefully as possible.
>
> The current idea is to let pages create as many Worker objects as
> requested, but not necessarily start them right away, so that no
> resources are allocated beyond a thin JS wrapper. As workers terminate
> and their number drops below the limit, more workers from the "ready
> queue" can be started. This supports implementation limits without
> exposing them.
>
> This is similar to how a 'sea of XHRs' would behave. The test page at
> <http://www.figushki.com/test/xhr/xhr10000.html> creates 10,000 async
> XHR requests to distinct URLs and then waits for all of them to
> complete. While it's obviously impossible to have 10K HTTP connections
> open in parallel, all of the XHRs will complete, given time.
>
> Does this sound like a good way to avoid the resource crunch due to a
> high number of workers?
>
> Thanks,
> Dmitry
>
>
>
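
For concreteness, here is a minimal sketch of the deferred-start queue
described above, written as plain JavaScript standing in for UA internals
(MAX_RUNNING and the thread machinery are assumed names, not anything
from the spec or an actual implementation):

    const MAX_RUNNING = 16;       // implementation-defined limit (assumed)
    const readyQueue = [];        // workers created but not yet started
    let running = 0;

    function createWorker(url) {
      const worker = { url };     // the thin JS wrapper; no thread yet
      if (running < MAX_RUNNING) {
        start(worker);
      } else {
        readyQueue.push(worker);  // defer until a slot frees up
      }
      return worker;              // creation always succeeds
    }

    function start(worker) {
      running++;
      startThread(worker);
    }

    function startThread(worker) {
      // Stand-in for spawning a real thread/process; here the "worker"
      // simply finishes after a simulated bit of work.
      setTimeout(() => onWorkerExit(worker), 10);
    }

    function onWorkerExit(worker) {
      running--;                  // a slot freed: start a queued worker
      if (readyQueue.length > 0) start(readyQueue.shift());
    }

Creating 10,000 workers this way only ever holds MAX_RUNNING threads at a
time; the rest sit in readyQueue as cheap wrappers until slots free up.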