[whatwg] Limit on number of parallel Workers.
Oliver Hunt
oliver at apple.com
Tue Jun 9 18:28:04 PDT 2009
I believe it will be difficult to have such a limit, as sites may
rely on GC to collect Workers that are no longer running (so the
number of running threads is non-deterministic), and in the context
of mixed-source content ("mash-ups") it will be difficult for any
content source to be sure it isn't going to contribute to that limit.
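For example, a page may spawn fire-and-forget workers without keeping
references to them. A minimal sketch of that pattern (the script name
is made up):

    // Nothing retains these Worker objects; threads whose workers have
    // finished are only reclaimed when GC collects the wrappers, so the
    // number of live threads at any moment is non-deterministic.
    function spawnTask(n) {
      var w = new Worker('task.js'); // 'task.js' is a hypothetical script
      w.postMessage(n);
      // no reference kept; w becomes unreachable when this returns
    }
    for (var i = 0; i < 100; i++) spawnTask(i);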
Obviously a UA shouldn't crash, but I believe that it is up to the UA
to determine how to achieve this -- e.g. an implementation with a 1:1
relationship between workers and processes will need a much lower
limit than an implementation with a worker-per-thread model, or an
m:n relationship between workers and threads/processes. Limiting the
specification simply because one implementation mechanism has certain
limits, when there are many alternative implementation models, seems
like a bad idea.
I believe that if there are going to be any worker-related limits,
they should realistically be a lower bound on the number of workers
rather than an upper bound.
--Oliver
On Jun 9, 2009, at 6:13 PM, Dmitry Titov wrote:
> Hi WHATWG!
>
> In Chromium, workers are going to have their separate processes, at
> least for now. So we quickly found that "while(true) foo = new
> Worker(...)" exhausts OS resources :-) In fact, this will kill
> other browsers too, and on some systems the unbounded number of
> threads will effectively "freeze" the system beyond the browser.
>
> We are thinking about how to reasonably place limits on the
> resources consumed by a 'sea of workers'. Obviously, one could just
> limit the maximum number of parallel workers available to a page,
> domain, or browser. But what do you do when the limit is reached?
> The Worker() constructor could return null or throw an exception.
> However, that seems to go against the spirit of the spec, since it
> usually does not deal with resource constraints. So it makes sense
> to look for the most sensible implementation, one that tries its
> best to behave well.
>
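> For illustration, a throwing constructor would force every caller to
> wrap worker creation defensively -- a hypothetical sketch:
>
>     var w = null;
>     try {
>       w = new Worker('task.js');  // 'task.js' is a made-up name
>     } catch (e) {
>       // Every page now needs its own retry/queueing logic, which is
>       // exactly the resource handling the spec leaves unspecified.
>     }
>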
> The current idea is to let pages create as many Worker objects as
> requested, but not necessarily start them right away, so no
> resources are allocated beyond the thin JS wrapper. As workers
> terminate and their number drops below the limit, more workers from
> the "ready queue" can be started. This makes it possible to support
> implementation limits without exposing them.
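>
> Roughly, the UA-side logic might look like this (a sketch in JS for
> clarity; the function names and the limit value are made up):
>
>     var MAX_RUNNING = 16;       // illustrative implementation limit
>     var running = 0;
>     var readyQueue = [];        // wrappers created but not yet started
>
>     function workerCreated(w) { // called by the Worker() constructor
>       readyQueue.push(w);
>       pump();
>     }
>
>     function workerTerminated(w) {
>       running--;
>       pump();
>     }
>
>     function pump() {
>       while (running < MAX_RUNNING && readyQueue.length > 0) {
>         startThread(readyQueue.shift()); // allocate real resources now
>         running++;
>       }
>     }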
>
> This is similar to how a 'sea of XHRs' would behave. The test page
> here creates 10,000 async XHR requests to distinct URLs and then
> waits for all of them to complete. While it's obviously impossible
> to have 10K HTTP connections in parallel, all XHRs will complete,
> given time.
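>
> A minimal sketch of such a test (the URL pattern is made up):
>
>     var remaining = 10000;
>     for (var i = 0; i < 10000; i++) {
>       var xhr = new XMLHttpRequest();
>       xhr.open('GET', '/data?' + i, true); // distinct, made-up URLs
>       xhr.onreadystatechange = function() {
>         // readyState 4 fires once per request when it completes
>         if (this.readyState === 4 && --remaining === 0)
>           alert('all 10,000 XHRs completed');
>       };
>       xhr.send(null);
>     }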
>
> Does this sound like a good way to avoid the resource crunch caused
> by a high number of workers?
>
> Thanks,
> Dmitry
>