[whatwg] Proposal for separating script downloads and execution

Glenn Maynard glenn at zewt.org
Fri Feb 11 16:31:10 PST 2011


Note that there's still concern that the feature in general hasn't been
justified properly.  In particular, the major real-world example used to
justify it is the Gmail scripts-in-comments hack, and I don't think we
actually know the complete justification for that.  We know it's aimed at
mobile browsers, but it may only matter for older mobile browsers with much
slower Javascript parsers, and not be relevant for today's or future
browsers (the ones that would actually support this feature), even on
mobile devices.
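For context, here's a rough sketch of what the scripts-in-comments technique
looks like; the module name and payload are hypothetical, and real
implementations fetch the commented-out source over the network rather than
embedding it, but the core trick is the same: the code is shipped inside a
comment so the engine never parses it at load time, then the comment
delimiters are stripped and the code eval'd on demand:

```javascript
// The payload is shipped inside a block comment, so the JS engine skips
// parsing its body when this string is first evaluated as a script.
var moduleSource =
  "/*\n" +
  "function greet(name) { return 'hello, ' + name; }\n" +
  "*/";

var moduleCache = {};

// Strip the comment delimiters and evaluate the hidden code on demand.
function loadModule(name, source) {
  if (!moduleCache[name]) {
    var code = source.replace(/^\/\*\n?/, "").replace(/\n?\*\/$/, "");
    // Indirect eval runs the code in global scope.
    (0, eval)(code);
    moduleCache[name] = true;
  }
}

loadModule("greeter", moduleSource);
console.log(greet("whatwg"));
```

The point is that the parse cost is paid at loadModule() time, not page-load
time -- which is exactly the deferral this proposal would make possible
without hiding code in comments.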

My justification is this: Javascript applications are bigger and more
complex than they used to be, and they'll only get bigger and yet more
complex.  Having codebases several megabytes in size in the future seems a
fair prediction.  Once we get to that point, having browsers parse all of
that at once, no matter how fast parsers are, seems unreasonable; we should
have a solid framework to allow modular codebases, as every other serious
application platform has.  It also seems like it may become very useful to
allow browsers to spend time (whether idle time or otherwise) not just on
parsing but on more expensive optimizations, and having a framework that
gives them access to scripts to do that in advance seems like a very good
idea.  (As timeless pointed out, it may be possible for browsers to work
around the hacks with hacks of their own, such as attempting to extract code
hidden in comments, but I don't think that's a sane way forward.)

Javascript applications generally aren't yet at that size, but I think it's
a fair prediction.  As it takes a long time for anything we're talking about
here to be implemented and deployed, I think it makes sense to not wait
until it actually becomes a problem.

To put forward an opposite argument: browsers caching parsed scripts might
address some of the performance question without any extra API.  Pages would
only have a longer load time the first time they were loaded; pulling a
parsed block of bytecode out of cache should be very fast.

Also, for what it's worth (not much), I ran a simple, very unscientific
benchmark, loading 40 MB of code in Chrome, a list of "function f() { a();
b(); c(); d(); }" functions.  It took about 6 seconds on my desktop, or
about 150ms per megabyte.  That suggests very weakly that on a current
parser on a desktop browser, a 5 MB application would take on the order of
750ms to load, assuming no parser caching.  I don't know how much of that is
parsing and how much is execution; I only mention it at all since I don't
think there have been any attempts at all so far to put numbers to the
performance question.

On Fri, Feb 11, 2011 at 5:44 PM, Nicholas Zakas <nzakas at yahoo-inc.com> wrote:

> Thanks Kyle, those comments were helpful. I've simplified and refined my
> proposal based on them and the others in this thread:
>
> https://docs.google.com/document/d/1wLdTU3xPMKhBP0anS774Y4ZT2UQDqVhnQl3VnSceDJM/edit?hl=en&authkey=CJ6z2ZgO
>
> Summary of changes:
> * Changed "noexecute" to "preload"
> * No HTML markup usage
>

It seems consistent to allow specifying this via markup, like defer and
async, so scripts can be preloaded declaratively, but it's a minor point.  I
suppose handling this sanely would also require another attribute indicating
whether onpreload has fired yet, so maybe it's not worth it.

> * No change to "load" event
> * Introduction of "preload" event
> * Removed mention of "readyState"
>

It's hard to read your example, since the indentation was, I think, mangled
during the paste into the document.

I think the example code can be simplified a lot to demonstrate the API more
clearly.  I've attached a simplified version.  It also explicitly catches
exceptions from execute() and calls errorCallback, and demonstrates feature
checking (in a simpler way).
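Roughly, the usage I have in mind looks like the following.  Since no
browser implements the proposal, a small mock object stands in for the
script element here so the sketch can run anywhere; the preload attribute,
preload event, and execute() method come from Nicholas's proposal, and the
rest is scaffolding of my own:

```javascript
// Minimal stand-in for a script element under the proposal: it "downloads"
// immediately, fires onpreload, and execute() runs the fetched source.
function createMockScript() {
  var script = {
    preload: true,
    src: "",
    onpreload: null,
    executed: false,
    execute: function () {
      // Stand-in for executing the already-downloaded, already-parsed code.
      script.executed = true;
    },
    startDownload: function () {
      if (script.onpreload) script.onpreload();
    }
  };
  return script;
}

function preloadScript(src, readyCallback, errorCallback) {
  var script = createMockScript();
  // Feature check: bail out if the browser doesn't support preloading.
  if (!("preload" in script)) {
    errorCallback(new Error("preload not supported"));
    return null;
  }
  script.preload = true;
  script.src = src;
  script.onpreload = function () {
    // Catch exceptions from execute() and report them via errorCallback,
    // as in the simplified example attached to this mail.
    try {
      script.execute();
      readyCallback(script);
    } catch (e) {
      errorCallback(e);
    }
  };
  script.startDownload();
  return script;
}

var result = null;
preloadScript("app.js",
              function (s) { result = s; },
              function (e) { result = e; });
```

In a real page, execute() would typically be deferred until the script is
actually needed rather than called straight from the preload handler; the
handler above just keeps the sketch short.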

-- 
Glenn Maynard
