[whatwg] Proposal for separating script downloads and execution

Kyle Simpson getify at gmail.com
Thu Feb 17 11:38:45 PST 2011


>> Do you have any example where hundreds of megabytes of JavaScript is
>> being loaded onto pages? Even "tens of megabytes" seems quite
>> extraordinary.
>
> Think 10,000 <script> elements all pointing to the same 25KB script.  If 
> you're forced to preload the script at src-set time, that's 250MB of data.
>
> And if the argument is that the scripts can share the data, I don't see 
> what guarantees that.

I don't know of any browser that is set to open more than 8 parallel 
connections. I can't imagine that you'd get 10,000 separate downloads of 
the same resource. Depending on race conditions, you might get a few extra 
requests in before the item was cached (if it caches at all). But once the 
item is in the cache, any additional identical requests for that element 
should be pulling from the cache (at the network-request level, that is), 
right? If a page requests a script 10,000 times and the script doesn't 
cache, or its contents are all different, then yeah, that page's resource 
utilization is already going to be abnormally high... and it will 
potentially be exacerbated (memory-wise) by the preloading functionality 
in question.

The question becomes: can the browser create a unique in-memory "cache" 
entry for each distinct script's contents, such that each script element 
holds a pointer to its appropriate copy of the ready-to-execute script 
contents, without duplication? I can't imagine the browser would need 
separate copies for identical script contents, but perhaps I'm missing 
something that prevents it from doing that uniqueness caching. I know very 
little about the inner workings of the browser, so I won't go any further 
in guessing there.
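
Just to illustrate what I mean, purely hypothetically (this is not a claim 
about any real browser's internals, and all the names here are made up):

    // hypothetical sketch -- made-up names, not real browser internals:
    // key the ready-to-execute entry by the script's contents, so that
    // identical scripts all share a single prepared copy
    var preparedScripts = {}; // contents -> shared prepared entry

    function getPreparedEntry(contents) {
      if (!preparedScripts[contents]) {
        preparedScripts[contents] = {
          source: contents
          // ...plus whatever parsed/compiled form the engine keeps
        };
      }
      return preparedScripts[contents]; // same object for identical contents
    }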

Even if they can't, what it means is that these edge cases with 10,000 
script requests might get dramatically worse. But it still seems like the 
normal majority of cases will perform roughly the same, if not better. Or 
am I missing something?


> Doing that while obeying HTTP semantics might be pretty difficult if you 
> don't have very low-level control over your network layer.

I'm not sure what you mean by "HTTP semantics" if it isn't about caching. 
But I don't think there'd be any reason for this proposal to suggest any 
different handling of resources (from an HTTP-semantics, 
network-request-layer perspective) than is already true of script tags. 
Can you elaborate on how the current network-resource-handling semantics 
of script tags would create HTTP-related issues if preloading were 
happening?

I wonder how IE is handling this, seemingly without too many issues (since 
they've been doing it forever).

In other words: if I loop through and create 10,000 script elements (no 
DOM append), and that causes 10,000 (or so) requests for that resource... 
how is that different/worse than if I loop through and create 10,000 
script elements that I do append to the DOM? Won't they have roughly the 
same impact on HTTP-layer loading, caching, etc.?
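
Sketched out, here are the two cases I'm comparing (the URL is just a 
placeholder):

    // case 1: preload-only -- under this proposal (and in IE today),
    // setting src on a detached script element starts the download
    for (var i = 0; i < 10000; i++) {
      var s = document.createElement("script");
      s.src = "same-25kb-file.js"; // fetch begins; no append, no execution
    }

    // case 2: the same loop, but appended -- already possible today
    for (var i = 0; i < 10000; i++) {
      var s = document.createElement("script");
      s.src = "same-25kb-file.js";
      document.getElementsByTagName("head")[0].appendChild(s); // fetch + execute
    }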


> Sure.  That doesn't mean we shouldn't worry about the edge cases.  It 
> might be we decide to ignore them, after careful consideration.  But we 
> should consider them.

OK, fair enough. I agree it's worthwhile to consider them. Sorry if I 
reacted too strongly to your bringing it up.


> Yes, and I'm on record saying that I need to think about my users and 
> protecting them from the minority of incompetent or malicious web 
> developers.  We just have slightly different goals here.

Also a fair statement.


>> To my knowledge, that process worked ok, and I think it's a decent model 
>> for going forward.
>
> Just to be clear, that process, on our end, was a huge engineering-time 
> sink.  Several man-months were wasted on it.  We would very much like to 
> avoid having to repeat that experience if at all possible.

It's a shame that it's being viewed as "wasted". With respect to 
"async=false", there was already going to be some site breakage cropping 
up from Firefox's script-ordering change, regardless of whether I had 
jumped in to complain about the LABjs breakage and to propose 
"async=false". And while "async=false" took a lot of extra time to discuss 
and agree on, it sure seemed like it ended up being the solution to all 
those other sites' problems. I'm obviously not on the Mozilla team, so 
perhaps I'm unaware of additional burdens that process placed on the team.
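
For anyone following along, the pattern that came out of that discussion 
looks roughly like this (a minimal sketch; the URLs are placeholders):

    // dynamically-inserted scripts normally execute as they finish
    // loading; setting .async = false opts them back into in-order
    // execution without blocking the parser
    var urls = ["a.js", "b.js", "c.js"];
    for (var i = 0; i < urls.length; i++) {
      var s = document.createElement("script");
      s.src = urls[i];
      s.async = false; // execute in insertion order: a.js, b.js, c.js
      document.getElementsByTagName("head")[0].appendChild(s);
    }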

I do think it's fair to say that the magnitude of impact of changing 
script-ordering semantics and that of changing script network-loading 
semantics are not really the same. But yes, I'm sure there would be a few 
sites that broke in some random way -- there always are.

Is it better to break a few sites so that many other sites can start 
improving performance? I dunno, but I tend to think it's worth considering 
that tradeoff. I've heard a similar argument made earlier in this thread.


> Does IE obey HTTP semantics for the preloads?  Has anyone done some 
> really careful testing of IE's actual behavior here?

I've done a lot of careful testing of IE's actual behavior here. But I'm 
not sure I know exactly what HTTP semantics I should be looking for. If 
you can share some specific questions to probe about IE's 
implementation/behavior, I'm more than happy to extend my existing tests 
to figure out the answers.

To be clear, if we find a significant flaw in IE's implementation, and 
that sheds new light on this whole proposal process, I'm all for finding 
it. But in my testing thus far, using this feature as prescribed (in a 
script-loader use-case), the functionality IE has seems quite sane and 
useful.
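
For reference, the pattern I've been testing looks roughly like this (a 
sketch of the script-loader use-case; the URL is a placeholder):

    // in IE, setting src on a detached script element starts the
    // download, but the script doesn't execute until it's in the DOM
    var script = document.createElement("script");
    script.onreadystatechange = function () {
      if (script.readyState === "loaded") {
        // downloaded (not executed); now execute it whenever we choose
        document.getElementsByTagName("head")[0].appendChild(script);
      }
    };
    script.src = "large-library.js"; // IE begins fetching here, pre-append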


--Kyle
