[whatwg] Script preloading
bruno at hexanet.net
Sat Jul 13 02:14:28 PDT 2013
Nobody should be arguing about 'pet projects' or 'ignorance' here; it's
really uncalled for. Characterizing strong counter-arguments or strong
views as 'attacks' is also unnecessary.
May I remind you, Alex, that this thread's request for feedback started
with two simple questions: "Would something like this, based on proposals
from a variety of people in the past, work for your needs?" and "Is there
something this doesn't handle which it would need to handle?" Ian himself
implied not quite "understanding these requirements".
As such, refusing to consider any new requirements, shooting down strong
criticism (eye-watering length or not), and/or suggesting that Ian's early
proposal is already a foolproof, completely-thought-through solution seems
a bit misplaced. If my English as a second language serves me well, that's
not quite the initial premise of this thread and its original questions.
Anyway, I just want to make some additional arguments as best I can, with
a use case below, in support of Kyle Simpson's apparent struggle to get
part of his points across, along with one of his remarks, and a more
direct attempt to answer Ian's questions.
On 7/12/13 8:31 AM, "Kyle Simpson" <getify at gmail.com> wrote:
>It'd be great if it just could automatically re-try or fallback to
>another URL. Yeah, that sounds cool. Sure, will do."
Retry or fallback was a first requirement for me before I even considered
using a CDN for jQuery. The need for a reliable fallback, with something
like a script.fail event, is quite basic.
To demonstrate that MANY people care about that: the page in question has
been viewed over 60,000 times, with high reputation scores. Yet that
thread offers a fallback that may not even work depending on DNS/network
conditions. Most people blindly assume it will, but the fact is, it
doesn't. At a minimum you are looking at a 10-30 second delay, i.e. the
page is considered dead and the visitor is likely gone...
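To make that delay point concrete, here is a minimal sketch (mine, not
from the thread; the function name and the injected `load` parameter are
my own) of racing a script load against an explicit timeout, so that a
dead CDN fails within a chosen interval instead of hanging on the network
stack's 10-30 second defaults:

```javascript
// Sketch: bound how long we wait for a CDN before giving up, so a
// fallback can be attempted quickly. `load(url)` is an injected loader
// that returns a Promise (in the browser, it would insert a <script>
// element and resolve on its load event; in tests, a stub).
function loadWithTimeout(load, url, ms) {
  let timer;
  const timeout = new Promise((resolve, reject) => {
    timer = setTimeout(() => reject(new Error('timeout: ' + url)), ms);
  });
  // Whichever settles first wins; clear the timer so it cannot fire late.
  return Promise.race([load(url), timeout]).finally(() => clearTimeout(timer));
}
```

Nothing here retries by itself; it only converts "page considered dead"
into a fast, catchable failure that fallback logic can act on.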
Fallbacks in a cloud-centric environment are currently moot: right now we
have nothing to prevent a page from being partially or fully down due to
external JS failures.
Let's take this premise: I am going to use Google's CDN for jQuery
because it's recommended, and CDNJS for the shim library because it's
convenient and fast. So:

<script id="jquery" src="googleCDN/jquery.js" async></script>
<script id="shims" src="CDNJS/shims.js" async></script>
<script dependencies="shims jquery" src="mysite/myscript.js"></script>
OK, so I rely on the browser to automatically load the dependencies for
me. Then either googleCDN or CDNJS fails, for whatever reason.
What does the browser do?
Try one more time and report me an error?
Let me down with a failed load altogether? My one-time visitor, whose
click I also happen to have paid Google AdWords for, cannot visit my
site. Wasted $5? Lost a potential sale?
So how does this proposal handle such a situation or, better put, give me
control to determine what has loaded, or whether it's going to succeed in
loading at all? It is not sufficient in this regard, and does not propose
to handle such failures.
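For comparison, what can be done today: the script element's error event
does fire on a failed fetch, so a fallback host can be wired up by
inserting a fresh element (re-setting src on an already-started script
does not trigger a new fetch). A minimal sketch of mine, with the
document injected so the control flow can be exercised outside a browser;
the function name is my own:

```javascript
// Sketch: load from a primary CDN, fall back to a second host on a
// network error, and surface a final failure instead of silently losing
// the page. `doc` is the browser document (or a stub in tests).
function loadWithFallback(doc, primarySrc, fallbackSrc, onFail) {
  const s = doc.createElement('script');
  s.async = true;
  s.src = primarySrc;
  s.onerror = () => {
    // A failed <script> cannot be reused; insert a brand-new element.
    const f = doc.createElement('script');
    f.async = true;
    f.src = fallbackSrc;
    f.onerror = onFail; // both hosts down: at least we know
    doc.head.appendChild(f);
  };
  doc.head.appendChild(s);
}
```

This only covers outright network errors; a slow-but-alive host still
stalls the page, and there is no visibility into intermediate states,
which is exactly the control being asked for.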
Also, using script id(s) has the problem of possible conflicts with other
elements using duplicate ids in the document, in an unpredictable fashion
(an id must be a unique identifier), i.e. ids can't be used. I actually
faced that conflict myself early on...
Nor can 'class', nor can 'name' (which I use, but it only works when set
dynamically and is not officially allowed as markup). That would leave
you with 'title' or the need for a new attribute, and no easy/fast way to
select them.
Overall, the ex2-getify.js approach to addressing control, failures,
execution, and fallback matters is a better way to deal with this, or at
least simpler for a developer at large to comprehend, in my view.
I personally wouldn't go as far as 'script.error'; I think that's maybe
pushing it, although a richer script.readyState-type protocol could
surely benefit from it.
But states such as 'waiting', 'failed', 'loading' and 'loaded' are needed
when it comes to 'guaranteed' pre-loading/loading, script.onload being
rather insufficient.
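To illustrate the kind of protocol meant here — purely hypothetical, not
a proposed spec, and all names below are my own — a script's lifecycle
could be tracked through those four states, with only legal transitions
allowed:

```javascript
// Hypothetical sketch of a richer script readyState protocol with the
// four states argued for above: waiting -> loading -> loaded | failed,
// where a retry/fallback re-enters 'loading' from 'failed'.
const TRANSITIONS = {
  waiting: ['loading'],
  loading: ['loaded', 'failed'],
  loaded:  [],
  failed:  ['loading']
};

function createScriptState() {
  let state = 'waiting';
  return {
    get state() { return state; },
    to(next) {
      if (!TRANSITIONS[state].includes(next)) {
        throw new Error('illegal transition: ' + state + ' -> ' + next);
      }
      state = next;
      return state;
    }
  };
}
```

With states like these exposed, a loader could tell "fetch still in
flight" apart from "fetched but not yet executed", which onload alone
cannot express.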
If link[rel=subresource] is adopted and works in a comprehensive way,
with possible states/events of its own, the need to pause execution is
not necessarily a requirement here. But if the idea is to use a basic
link[rel=subresource] and implement whenneeded/markNeeded() on top of it,
both without any notion of 'waiting', 'failed', 'loading' or 'loaded', to
fully accomplish one goal, it feels redundant and still missing critical
components for reliability's sake. I for one wouldn't really foresee
myself using whenneeded/markNeeded() anyway.
While existing methods for delayed parsing/execution work and are
supported, they are still considered a hack. And as per Jake's comment,
if delayed parsing/execution were adopted, "dependencies" would be
redundant, which very much links the two matters together.
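One such hack, as popularized by loaders in the LABjs family: fetch the
script under a bogus type so that (some) engines cache it without
executing it, then insert a real script element later to execute it from
cache. A rough sketch of mine, with the document injected; the behavior
varies across browsers, which is precisely why it counts as a hack rather
than a guaranteed preload:

```javascript
// Sketch of the preload-without-execute hack: a bogus `type` makes some
// engines download the script into cache without running it; execution
// happens later by inserting a normal <script> for the same URL.
function preload(doc, url) {
  const s = doc.createElement('script');
  s.type = 'text/cache'; // not executable, but (in some engines) fetched
  s.src = url;
  doc.head.appendChild(s);
}

function execute(doc, url) {
  const s = doc.createElement('script');
  s.src = url; // ideally served from cache, if the preload worked
  doc.head.appendChild(s);
}
```

Whether the second fetch actually hits the cache depends on the engine
and on HTTP caching headers — hence "works and is supported", yet still
a hack.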
In closing, I think it is fair to say that, with almost every site now
depending on so many external APIs and CDNs, 'script loading failure
detection' as a tool for implementing fallbacks is — in addition to the
performance concerns — needed, and a requirement for me. That is my main
argument in this email.