[whatwg] Limit on number of parallel Workers.

Drew Wilson atwilson at google.com
Tue Jun 9 18:43:32 PDT 2009


It occurs to me that my statement was a bit stronger than I intended - the
spec *does* indeed make guarantees regarding GC of workers, but they are
fairly loose and typically tied to the parent Document becoming inactive.
-atw

On Tue, Jun 9, 2009 at 6:42 PM, Drew Wilson <atwilson at google.com> wrote:

> This is a bit of an aside, but section 4.5 of the Web Workers spec no
> longer makes any guarantees regarding GC of workers. I would expect user
> agents to make some kind of best effort to detect unreachability in the
> simplest cases, but supporting MessagePorts and SharedWorkers makes
> authoritatively determining worker reachability exceedingly difficult
> outside the simpler cases (DedicatedWorkers with no MessagePorts or nested
> workers, for example). It seems like we should be encouraging developers to
> call WorkerGlobalScope.close() when they are done with their workers, which
> in the case below makes the number of running threads more deterministic.
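>
> A minimal sketch of that pattern, for illustration (the file name, the
> message shapes and the doWork() helper are invented):
>
>     // page script
>     var worker = new Worker("task.js");
>     worker.onmessage = function (e) {
>       // by this point the worker has already called close() on itself
>       console.log("result: " + e.data);
>     };
>     worker.postMessage("start");
>
>     // task.js
>     onmessage = function (e) {
>       postMessage(doWork(e.data));   // hypothetical helper
>       close();   // WorkerGlobalScope.close(): frees the thread deterministically
>     };
>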
> Back on topic, I believe what Dmitry was suggesting was not that we specify
> a specific limit in the specification, but rather that we reach some sort of
> general agreement on how a UA might handle limits (i.e. what it should do
> when the limit is reached).
> His suggestion of delaying the startup of the worker seems like a better
> solution than other approaches like throwing an exception on the Worker
> constructor.
>
> -atw
>
> On Tue, Jun 9, 2009 at 6:28 PM, Oliver Hunt <oliver at apple.com> wrote:
>
>> I believe it will be difficult to have such a limit, as sites may rely on
>> GC to collect Workers that are no longer running (so the number of running
>> threads is non-deterministic), and in the context of mixed-source content
>> ("mash-ups") it will be difficult for any content source to be sure it
>> isn't going to contribute to reaching that limit.  Obviously a UA shouldn't
>> crash, but I believe it is up to the UA to determine how to achieve this --
>> e.g. an implementation with a 1:1 relationship between workers and
>> processes will have a much lower limit than an implementation with a
>> worker-per-thread model or an m:n relationship between workers and
>> threads/processes.
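>>
>> As a rough illustration of that GC point (the file name is invented):
>>
>>     function fireAndForget(data) {
>>       var w = new Worker("logger.js");
>>       w.postMessage(data);
>>       // no reference is kept; whether and when this worker's thread
>>       // actually goes away is left to the UA's garbage collector
>>     }
>>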
>> Having the specification impose a limit simply because one implementation
>> mechanism has certain limits, when there are many alternative implementation
>> models, seems like a bad idea.
>> I believe that if there are going to be any worker-related limits, they
>> should realistically be a lower limit on the number of workers rather than
>> an upper one.
>>
>> --Oliver
>>
>>
>> On Jun 9, 2009, at 6:13 PM, Dmitry Titov wrote:
>>
>> Hi WHATWG!
>>
>> In Chromium, workers are going to have their own separate processes, at
>> least for now. So we quickly found that "while(true) foo = new Worker(...)"
>> rapidly consumes OS resources :-) In fact, this will kill other browsers
>> too, and on some systems the unbounded number of threads will effectively
>> "freeze" the system beyond the browser.
>>
>> We are thinking about how to reasonably place limits on the resources
>> consumed by a 'sea of workers'. Obviously, one could just limit the maximum
>> number of parallel workers available to a page, a domain, or the browser.
>> But what do you do when the limit is reached? The Worker() constructor could
>> return null or throw an exception. However, that seems to go against the
>> spirit of the spec, since it usually does not deal with resource
>> constraints. So it makes sense to look for the most sensible implementation
>> that tries its best to behave well.
>>
>> The current idea is to let the page create as many Worker objects as
>> requested, but not necessarily start them right away. That way, no resources
>> are allocated except the thin JS wrapper. As workers terminate and their
>> number drops below the limit, more workers from the "ready queue" can be
>> started. This allows the implementation to enforce its limits without
>> exposing them.
>>
>> This is similar to how a 'sea of XHRs' would behave. The test page at
>> <http://www.figushki.com/test/xhr/xhr10000.html> creates 10,000 async XHR
>> requests to distinct URLs and then waits for all of them to complete. While
>> it's obviously impossible to have 10K HTTP connections in parallel, all
>> XHRs will be completed, given time.
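>>
>> For reference, that test is roughly equivalent to this (details guessed,
>> not the actual test source):
>>
>>     var total = 10000, done = 0;
>>     for (var i = 0; i < total; i++) {
>>       var xhr = new XMLHttpRequest();
>>       xhr.open("GET", "/resource?" + i, true);   // a distinct URL per request
>>       xhr.onreadystatechange = function () {
>>         if (this.readyState === 4 && ++done === total)
>>           alert("all " + total + " requests completed");
>>       };
>>       xhr.send();
>>     }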
>>
>> Does it sound like a good way to avoid the resource crunch due to high
>> number of workers?
>>
>> Thanks,
>> Dmitry
>>
>>
>>
>

