[whatwg] Feedback regarding script execution

Ian Hickson ian at hixie.ch
Wed Sep 7 14:53:49 PDT 2011


Note that I recently checked in some changes to <script> to expose a 
readyState IDL attribute and to fire 'readystatechange' events, based on 
what IE has implemented here.
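
For illustration only, a rough sketch of how a page might watch these
(the state names follow the IE model; exact values and support will
vary):

  var script = document.createElement('script');
  script.src = 'example.js';   // illustrative URL
  script.onreadystatechange = function () {
    // In the IE model the readyState moves through values such as
    // "loading", "loaded", and "complete".
    if (script.readyState == 'loaded' || script.readyState == 'complete') {
      script.onreadystatechange = null;
      // the script has finished loading
    }
  };
  document.getElementsByTagName('head')[0].appendChild(script);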

On Tue, 24 May 2011, Alexandre Morgaut wrote:
> 
> My understanding right now is that:
> 
> - if we want a script to be downloaded without blocking the UI, we 
> should use "async"

Right.


> - implementors shouldn't block the UI while parsing the JS (but they 
> should still respect the order of execution when required)

I'd say "need not block", rather than "should not block". The latter is 
more a judgement call.


> Some scripts are not "required" to be executed until a user action like 
> a "click", so that's great if their loading, parsing, and execution 
> don't block.  For progressive enhancement, I like to run first only 
> what is mainly required.
>
> But once the user performs this kind of more specific action, the event 
> handler may not work, as the script providing the required behavior 
> might not have been executed yet.
> 
> To resolve this:
> 
> - I consider scripts providing such behaviors as named modules
>
> - when such user action happens
>
>         - I set a setTimeout() with a handler showing "processing" 
> feedback in the UI if the action takes too much time, as I would if I 
> needed to fetch data
>
>         - instead of invoking my specific behavior, I add its invocation 
> to a named callbacks list which will be looked up by the "module" having 
> the same name at the end of its initialization
>
>                 - Note: the callback will cancel the setTimeout or 
> remove the "processing" feedback
> 
> So I can already do this without new vendor implementation.

Yup, that should work pretty well in the current model.
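
For illustration, a rough sketch of that pattern (the helper names,
timing, and "processing" UI functions are all invented for the example):

  // Callbacks queued per module name, drained when the module finishes
  // its initialisation.
  var pendingCallbacks = {};

  function invokeWhenReady(moduleName, action) {
    // Show "processing" feedback if the module takes too long to arrive.
    var timer = setTimeout(showProcessingFeedback, 200);  // invented UI helper
    (pendingCallbacks[moduleName] = pendingCallbacks[moduleName] || []).push(
      function () {
        clearTimeout(timer);
        hideProcessingFeedback();  // invented UI helper
        action();
      });
  }

  // Each "module" script calls this at the end of its initialisation.
  function moduleReady(moduleName) {
    var queue = pendingCallbacks[moduleName] || [];
    for (var i = 0; i < queue.length; i++) queue[i]();
    pendingCallbacks[moduleName] = [];
  }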


> It just adds more asynchronous code to handle in my app. But, to be 
> honest, my first intuition was:
>
> - change the "async" attribute value of the script element to false to 
> block the execution of the "click" (or any other) event handler 
> until the required script has been executed
>
>         -> it could prevent the requirement of handling asynchronous 
> code via callback lists, but I'm not sure how much it is acceptable

Yeah, I could see how that would be an intuitive way to expect it to 
work. It doesn't (as you discovered). Since it would mean the browser just 
locking up while the download finished, or at least showing its own 
progress bar UI rather than something that feels like part of the app, I'm 
not sure how desirable it really would be anyway.


On Tue, 24 May 2011, Nicholas Zakas wrote:
> 
> There is a general need on the web for speed (duh). We want not just the 
> current experience to be fast but the *next* experience to be fast. The 
> next experience is frequently made up of HTML, CSS, and JavaScript that 
> isn't needed immediately, and so is deferred. The timing of when to 
> download these resources is something that we've struggled with due to 
> the various ways different resources get loaded.
> 
> We did a number of things on the Yahoo! homepage to help the next 
> experience be fast (note: not all are still deployed):
> 
> 1) After page load, we started to load CSS and JavaScript for the apps 
> we previously had on the left side of the page. The intent was to make 
> sure they came up as quickly as possible.
> 
> 2) Preload images that may be needed for the next experience via (new 
> Image().src = "foo.png").
> 
> 3) We delayed loading JavaScript for the search assist dropdown until 
> the user set focus to the searchbox.
> 
> Your assertion that loading a file that simply defines a function will 
> solve the problem is a bit too simplistic for most web applications. 
> This is what we did in #1 and it actually caused performance problems as 
> the browser would stop to execute the script as it came in, interrupting 
> whatever the user was already doing. Amazingly, delays of longer than 
> 100ms are actually perceivable by users[1], and small delays interrupt 
> running animations very noticeably (a common complaint we had while 
> testing some of the preload strategies).

Fixing this latency can be done entirely in the browsers today, by moving 
compilation off the main thread. We don't have to change the language to 
support this feature.


> Moving parsing and compilation to a background thread solves part of the 
> problem, namely that doing so currently freezes the UI. It doesn't 
> solve what I consider to be the important part of the problem, and that 
> is the inability to have a JavaScript resource downloaded but not 
> applied to the page. The best next experience can only be achieved when 
> the resources are ready and then applied at the right moment.

Since the download and compilation can all be done in the background, 
*once browsers do this*, scripts can be structured to do nothing but 
provide a single callback that "applies" the script, before which the 
script does nothing. That would do exactly what you describe, as far as I 
can tell.
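
For example, the downloaded script could consist of nothing but a
definition along these lines (the name applyFeature is illustrative):

  // feature.js: executing this does nothing except define a callback.
  window.applyFeature = function () {
    // ...all the actual setup work goes here...
  };

The page then calls applyFeature() at whatever moment it wants the
script "applied".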


> To be clear: this is not a problem that is unique to JavaScript. The 
> same problem exists with all external resources, such as images and CSS. 
> The difference is that both images and CSS allow you to download the 
> resource and then apply to the page later when you want (images via new 
> Image(), CSS via dynamic <link> element that is only applied when added 
> to the page).

Script allows this too, just by having the script do nothing except define 
a callback. Defining a callback is trivially fast to do (the cost is in 
the download and compilation stages, which can be done in the background).


I recommend approaching the browser vendors directly to convince them that 
moving compilation to a secondary thread is a worthwhile optimisation.


On Wed, 25 May 2011, Nicholas Zakas wrote:
>
> Parsing and compilation on a background thread removes some of the 
> problem but not all of it. Ultimately, even if the script is just a 
> function waiting to be called, the browser still executes it in a 
> blocking fashion after parsing and compilation. It's the execution that 
> is the troublesome part, because it interferes with the UI. The fact that the 
> script isn't doing much is helpful, but once again, there will be a 
> non-zero interrupt that can affect user experience.

If the script does nothing but define a callback, executing it will take 
roughly zero time. Certainly far less time than is perceivable or than 
could interfere with animations.


On Thu, 26 May 2011, James Robinson wrote:
> 
> This isn't practical if the contents of the <script> are not under the 
> author's direct control.  For example, an author who wanted to use 
> jQuery would create a <script> tag with the src set to one of the 
> popular jQuery mirrors (to maximize the chance of the resource being 
> cached), but would then have no control over when the script is 
> actually evaluated.  It's easy to imagine a case where the author wants to 
> initiate the network load as soon as possible but might not need to 
> actually start using the code until some point further along in the 
> loading sequence, possibly after a simple form of the page is made 
> visible to the user.

I think it's pretty clear that if browsers started moving as much JS 
processing as possible off the main thread, libraries would start 
taking advantage of it, so I don't think we need to worry about this 
case. It's an issue only during transition.


> For this use case I think it would be handy to have a way to express 
> "please download this script but do not start evaluating it until I'm 
> ready".  As a straw man, what about using the disable attribute?  When 
> the load completes, if the disabled attribute is set then the script is 
> not evaluated until the disabled attribute is unset.  After the script 
> evaluates it goes into a state where the disabled attribute is ignored.  
> Browsers that ignored this behavior would evaluate the script sooner 
> than the author might expect, but it's usually very easy to detect when 
> this is happening and react appropriately.

If all you want to do is download it (not preprocess it) then you can just 
use XHR and then stick the script contents into an inline <script>.
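
A rough sketch of that approach (a same-origin URL is assumed, and error
handling is omitted):

  // Download now, without evaluating anything.
  var xhr = new XMLHttpRequest();
  var source = null;
  xhr.open('GET', 'library.js', true);   // illustrative same-origin URL
  xhr.onreadystatechange = function () {
    if (xhr.readyState == 4 && xhr.status == 200)
      source = xhr.responseText;         // keep the text; don't run it yet
  };
  xhr.send();

  // Evaluate later, whenever the page is ready for it.
  function runLibrary() {
    var script = document.createElement('script');
    script.text = source;
    document.getElementsByTagName('head')[0].appendChild(script);
  }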


On Fri, 27 May 2011, Boris Zbarsky wrote:
> On 5/27/11 1:10 PM, Aryeh Gregor wrote:
> > Also, consider a third possibility: currently, the part of <script 
> > async> that's captured by the first timing in Ian's/Boris' example 
> > (whether it's parsing or compilation or whatever) blocks the main 
> > thread in browsers, even though it's async.  (Right?)
> 
> This is true at the moment in Gecko+Spidermonkey.  I can't speak for 
> others.

zewt's test case is interesting:

https://zewt.org/~glenn/test-top-level-context-execution/

On my machine, Gecko takes about 150ms to compile the code (time from 
download finishing to script starting), about 0ms to run the top-level 
code, and about 10ms to run the initialisation function. Chrome, on the 
other hand, takes about 10ms to compile the code, about 0ms to run the 
top-level code, and about 150ms to run the initialisation function.

Gecko demonstrates that it is possible to move the vast bulk of the 
execution time here off the main thread (though it hasn't yet been done), 
indicating that we don't need a new feature to make preparing this kind of 
script fast. Chrome, similarly, shows that today for this kind of code 
there is no need for a feature to explicitly make the code prepare faster, 
though in Chrome's case actually applying the code would be slow.


On Tue, 24 May 2011, Steve Souders wrote:
>
> I only want to do the processing on the second script if the user 
> activates the feature. This is important on mobile now to reduce power 
> consumption but is also important on desktops as CPUs become more power 
> sensitive and JS payloads grow.

If the problem is power consumption, then we really have to consider the 
problem at a higher level. Will delaying the user by 100ms where the user 
is just waiting looking at a "please wait" sign on the screen use more or 
less power than downloading and running the script in the background so 
that the user spends less total time with the screen on? Given the battery 
cost of a big LCD screen vs the battery cost of the radio and CPU, this 
might not be a clear-cut decision.

To consider the battery cost here I'd really need to look at a concrete 
example and do some measurements.


On Mon, 30 May 2011, Kyle Simpson wrote:
>
> [...]

The majority of your comments are essentially disagreeing with the basic 
premise that if authors want to optimise their code, they need to be able 
to control that code.

Fundamentally, I disagree with the idea that the only code that a site can 
update is the script loading code. Sites need to be able to handle 
security problems in any of the code they rely on, and the same techniques 
that can be used to resolve security problems can be used to solve loading 
problems. Also, we need to design for the future, not the past. There will 
be more sites made in the future than have been made to date. While it's 
important that we not break past sites, it's more important to make future 
sites faster than it is to make today's sites faster, especially in the 
case of sites that simply can't (or won't) change their scripts.


> There's a whole bunch of comments in this thread which allude to the 
> idea that the problems being expressed are simple to fix if authors just 
> change how they write their code. While that may (or may not) be true, 
> it's irrelevant to the real-world web we're trying to fix *right now*, 
> because the vast majority of JavaScript is not written this way (such 
> that it's entirely modular and causes no side-effects on "execution").

Nothing discussed in this forum will help "right now". Any changes to the 
spec will take months if not years to deploy to a majority of browsers.


> Continuing to suggest in this thread that the solution is to modify the 
> script is both aggravating and unhelpful, as it completely misses the 
> most important majority use-case: loading (almost) all the current 
> JavaScript on the web.

Since pages using those scripts are going to need to be adapted to use any 
new feature we come up with, why would such adaptations be limited to the 
script loading library and not other libraries?

It's not like these adaptations are complicated. All we're talking about 
here is moving things into a function that can be run later, instead of 
running everything immediately. It's maybe a half-dozen lines of 
boilerplate code in the most complicated case, as far as I can tell.
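
For instance, it is roughly the difference between a script that does its
work at the top level and one that wraps the same work in a function the
page calls later (the function names here are illustrative):

  // Before: the work runs as soon as the script is evaluated.
  setUpWidgets();
  bindEventHandlers();

  // After: the same work, deferred behind a callback for the page to invoke.
  window.initModule = function () {
    setUpWidgets();
    bindEventHandlers();
  };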


> > But really, it seems better to structure your scripts so that they 
> > don't do anything when they are run except expose a callback that you 
> > can run when you want the code to run.
> 
> Sure, it's better. But it's also unrealistic to expect all the millions 
> of scripts on the web to change simply because I want my pages to load 
> faster.

Any changes to the spec are going to have to result in changes to pages 
for those pages to take advantage of the spec changes.

In practice, sites are improved individually, and it is not at all 
unrealistic to expect a site to improve its scripts.


> And as I said, it's untenable to suggest that my ONLY remedy is to 
> self-host and modify such scripts.

No need to self-host them. Just like the scripts in question can be 
patched to fix security bugs, they can be patched to have different setup 
behaviour, and those patches can be hosted in the same place.


> > What problems do these solutions solve?
> 
> Specifically, they solve this problem:
> 
> I want to load two or more scripts that I don't control, from locations 
> that aren't my own, in parallel (for performance), and I want them to 
> execute in order..... BUT I want to control when that execution starts.

I think the better solution to this problem is to get control over those 
scripts and locations. Without a concrete example of the problem (i.e. an 
actual site, an actual explanation of why you can't control scripts that 
you are trusting to run on your site!), it is hard to evaluate the 
situation.


> This thread seems to be so easily side-tracked into the minutiae of 
> conjecturing about background thread parsing and different 
> implementation details. I wish we could just take as a given that 
> parsing/execution of a script are not zero-cost (though they may be 
> smaller cost, depending on various things), and that ANY control that a 
> web performance optimization expert can get in terms of when non-zero 
> cost items happen is, in general, a good thing.

I don't think that's a given at all. For example, it doesn't seem to be a 
given that giving authors control over whether GC happens before or after 
printing (say) is a good thing.

There is a huge risk in giving authors control: many authors are not 
experts, and will make poor choices.


> The thread also makes a lot of references to <script async> and how that 
> seems to be the silver-bullet solution. The problem is two-fold:
> 
> 1. <script async> only effectively tells a user-agent to make the 
> loading of that script resource happen in parallel with other tasks, and 
> that it can choose to execute that script at any point whenever it feels 
> is good. This means that the script in fact can be executed still before 
> DOM-ready, or between DOM-ready and window.onload, or after 
> window.onload. Thus, the script's execution affects the page's rendering 
> in an intermittent way, depending on network speeds, etc.
> 
> <script defer> on the other hand specifically tells the script to wait 
> on its execution until after onload.

Actually both async and defer scripts in a page will run before the 
'onload' handler for the page.
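
For instance, in a page like this, both external scripts will have been
fetched and executed by the time the 'load' handler runs:

  <script src="a.js" async></script>
  <script src="b.js" defer></script>
  <script>
    window.onload = function () {
      // Both a.js and b.js have run by now: the async script runs as soon
      // as it is available, the deferred script runs after parsing
      // finishes, and the 'load' event waits for both.
    };
  </script>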


On Mon, 13 Jun 2011, Diogo Resende wrote:
> > 
> > If you just mean that you have several different "panels" or "modes" 
> > that the user can switch between, then I would recommend just having 
> > several <section> elements, one for each module, and all but the 
> > active one have a hidden="" attribute. Script-wise, you'd just have 
> > each script loaded and active from the start, they'd just only work 
> > with their own sections.
> 
> The sections may not be possible to enumerate, or there may be too many 
> to enumerate. Imagine MobileMe, where each section of the web app has its 
> own interaction (java)script (I suppose). I'm looking for the best way 
> to load that when switching between sections/modules and to discard it 
> when switching back.
> 
> Perhaps MobileMe could have the interaction all bundled in one script, 
> but for a bigger app to have a fast load/response time, the scripts 
> should be divided... I think.

I'd have to see concrete examples to be able to profile them to determine 
this one way or the other.
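
For reference, the <section>-per-module approach quoted above might look
roughly like this (the ids and the switching helper are illustrative):

  <section id="mail">...</section>
  <section id="calendar" hidden>...</section>
  <section id="contacts" hidden>...</section>

  <script>
    // Show one section and hide the rest; each module's script stays
    // loaded and only operates on its own section.
    function showSection(id) {
      var sections = document.getElementsByTagName('section');
      for (var i = 0; i < sections.length; i++)
        sections[i].hidden = (sections[i].id != id);
    }
  </script>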

-- 
Ian Hickson               U+1047E                )\._.,--....,'``.    fL
http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'

