[whatwg] Limit on number of parallel Workers.
jam at google.com
Wed Jun 10 15:01:34 PDT 2009
The current thinking is a smaller per-page limit (i.e. covering all
iframes and external scripts), say around 16 workers, plus a global limit
across all loaded pages, say around 64 or 128. The benefit of two limits is
to reduce the chance of pages behaving differently depending on what other
sites are currently loaded.
We plan on increasing these limits by a fair amount once we are able to run
multiple JS threads in a process. Even then, though, we'll still want some
limits, and we wanted to use the same approach.
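The two-tier scheme above can be sketched as a simple admission check. This is an illustrative sketch only, not actual browser code; the class and method names are made up, and the limits are the ones proposed in the email (16 per page, 64 global):

```python
from collections import defaultdict

PER_PAGE_LIMIT = 16   # proposed limit per page (incl. iframes)
GLOBAL_LIMIT = 64     # proposed limit across all loaded pages

class WorkerAdmission:
    """Hypothetical two-tier limiter for spawning Workers."""

    def __init__(self, per_page=PER_PAGE_LIMIT, global_cap=GLOBAL_LIMIT):
        self.per_page = per_page
        self.global_cap = global_cap
        self.counts = defaultdict(int)   # page -> running workers
        self.total = 0                   # workers across all pages

    def try_spawn(self, page):
        """Admit a new worker only if neither limit is exceeded."""
        if self.counts[page] >= self.per_page or self.total >= self.global_cap:
            return False    # caller would queue or reject the worker
        self.counts[page] += 1
        self.total += 1
        return True

    def on_exit(self, page):
        """Release a slot when a worker terminates."""
        self.counts[page] -= 1
        self.total -= 1
```

The per-page check is what keeps a page's behavior mostly independent of what other sites happen to be loaded: a page only hits the global cap once the whole browser is near it.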
On Wed, Jun 10, 2009 at 2:56 PM, Robert O'Callahan <robert at ocallahan.org> wrote:
> On Thu, Jun 11, 2009 at 5:24 AM, Drew Wilson <atwilson at google.com> wrote:
>> That's a great approach. Is the pool of OS threads per-domain, or per
>> browser instance (i.e. can a domain DoS the workers of other domains by
>> firing off several infinite-loop workers)? Seems like having a per-domain
>> thread pool is an ideal solution to this problem.
> You probably still want a global limit, or else malicious sites can DoS
> your entire OS by spawning workers in many synthetic domains. Making the
> limit per-eTLD instead of per-domain would help a bit, but maybe not very
> much. Same goes for other kinds of resources; there's no really perfect
> solution to DoS attacks against browsers, AFAICT.
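Grouping the limit by eTLD rather than by host, as suggested above, means sibling subdomains share one budget. A minimal sketch of that grouping, assuming a tiny hard-coded suffix set as a stand-in for the real Public Suffix List (the function name is made up for illustration):

```python
# Stand-in for the Public Suffix List; real browsers consult the full list.
SUFFIXES = {"com", "org", "net", "co.uk"}

def etld_plus_one(host):
    """Reduce a hostname to its eTLD+1, so e.g. foo.example.com and
    bar.example.com fall into the same worker-limit bucket."""
    labels = host.split(".")
    # Find the longest matching public suffix, then keep one more label.
    for i in range(len(labels)):
        if ".".join(labels[i:]) in SUFFIXES:
            return ".".join(labels[max(i - 1, 0):])
    return host
```

As the reply notes, this only helps a bit: an attacker can still register many distinct eTLD+1 names, which is why a global limit is needed regardless.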
> "He was pierced for our transgressions, he was crushed for our iniquities;
> the punishment that brought us peace was upon him, and by his wounds we are
> healed. We all, like sheep, have gone astray, each of us has turned to his
> own way; and the LORD has laid on him the iniquity of us all." [Isaiah
> 53:5-6]