[whatwg] No interface flicker across page loads, without JavaScript (was: framesets)

Aryeh Gregor Simetrical+w3c at gmail.com
Fri Oct 16 09:09:25 PDT 2009


On Fri, Oct 16, 2009 at 10:16 AM, Tab Atkins Jr. <jackalmage at gmail.com> wrote:
> Indeed, script changes should persist.  The problem he was
> highlighting, though, was the fact that a 'site bug' like that would
> be very easy to have happen accidentally.  It could even go unnoticed
> by the site developers, if they always come in through the front page
> and the content is correct there - only users following search engine
> links or bookmarks deep into the site would see the obsolete content,
> and it would *never go away* during that browsing session.
>
> This error seems like it would be very easy to make.

Hmm.  Maybe.

> As well, this still doesn't answer the question of what to do with
> script links between the static content and the original page, like
> event listeners placed on content within the <static>.  Do they get
> preserved?  How would that work?  If they don't, then some of the
> benefit of 'static' content is lost, since it will be inoperable for a
> moment after each pageload while the JS reinitializes.

Script links should be preserved somehow, ideally.  I would like to
see this be along the lines of "AJAX reload of some page content,
without JavaScript and with automatically working URLs".

> I would hope that authors never did that!  That means that if a user
> deeplinks straight into the site, they'll get the empty element.  The
> hash won't help them, since it's their first pageview.  *Hopefully*
> they'll swing by a page that has the actual contents and the hashfail
> would trigger an update, but that's not a guarantee, and in the
> meantime they have an empty element there.

I meant in conjunction with an HTTP header the browser would send,
like "Static-Hashes", containing the hashes of all the <static>
elements it already knows about.  This is like the Static-IDs header
I described in my first post.  The idea would be that a server-side
script could chop out the unneeded parts on a per-request basis.
However, I think SDCH is a better solution here.
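
For illustration, a request might look something like this (the
header name is from my proposal, but the value syntax here is
invented; nothing like it is specced):

  GET /message/42 HTTP/1.1
  Host: example.com
  Static-Hashes: sha1:7f3a...=nav sha1:c09d...=footer

The server-side script could then omit or empty out any <static>
element whose hash matches, since the browser evidently already has
its contents.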

> I think being updated is more important than persisting changes to
> (now out-of-date) content.

It depends on how important the changes are.  If for some reason you
have a <textarea> in <static>, and the user has entered tons of text,
saving it is fairly important.  Although you should be able to hit
"back" to retrieve it, actually, so maybe not *that* important.

> One of the big reasons Gmail is so AJAXy is because of the heavy
> script lifting it has to do on each page load.  AJAX lets them persist
> the script while updating the content.  <static> wouldn't help with
> that.

That's why script needs to persist.  My initial proposal doesn't
handle that well at all.

> Only for the first pageload.

The first page load is by far the most important.

> And separate pages for each interface widget isn't bad.  Heck, it's
> easier to maintain with everything self-contained.

Handling everything in one request is *much* simpler from the POV of
server-side scripting.  If it's separate requests, you can typically
only communicate between them if you use a database of some kind.  That's
a real pain.  You're running several instances of the script which all
need to produce consistent output, and that's a lot harder than if
it's just one instance.  What if different cookies end up being sent
to different frames, for instance?  That's very possible if the user
gets logged out at some point, say.  The new page load needs to be
able to invalidate the other parts of the page somehow.

> True.  Minting a new element might be a better deal here, but having
> it inherit much of the semantics of <iframe seamless>.  Then you can
> have it contain fallback content for browsers that don't implement
> <static>, and use @src for browsers that do.  That would also allow us
> to bypass any of the <iframe> complications that might unnecessarily
> complicate use or implementation.

I still don't like the requirement for multiple pages.  It might not
be a big deal if you're dealing mainly with static content, but for
complex server-side scripts I think it would be a real pain.
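
For reference, here's roughly what I understand the suggestion to
look like (the element name is hypothetical):

  <static src="sidebar.html">
    <!-- Fallback for browsers that don't implement the new
         element: the full sidebar markup, inline. -->
    <ul id="nav">...</ul>
  </static>

The sidebar then has to be maintained as a separate resource, which
is exactly the multiple-pages requirement I'm objecting to.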

So, here's a preliminary description of a use case.  I'm not sure
it's sane yet.

Use Case: A page should be able to instruct the browser that when a
user follows a link, only part of the page is reloaded, while the
rest stays fixed.

Requirements:
1) Little to no JavaScript should be required.  Large JavaScript
frameworks should not be necessary to get basic persistence of
interface state.

2) Static parts of the page should not have their state discarded,
either script-related state (e.g., registered event handlers) or other
state (e.g., user-entered text).

3) It should be possible for user agents to implement the feature so
that the static parts of the page don't flicker or jump around unless
they've actually changed.  (This might or might not be an actual
conformance requirement, but it should be possible for them to do it
if they want.)

4) It should be possible to easily attach this to an existing set of
static pages, or to JavaScript-light pages produced by a web
application.  Ideally, it should be possible to do this by adding a
few new elements and/or JavaScript snippets, without any special
server-side code.

5) The solution must be backward-compatible.  Static pages that deploy
it should work identically in legacy browsers, or almost identically.

6) The solution should make stable, bookmarkable, sharable URLs
natural and easy.  The author should have to expend no extra effort to
get these, since they're essential to the web's health and success.

7) It would be desirable if the solution didn't require transmitting
any unchanged content, but it's acceptable to leave that to other
features (e.g., SDCH) if that's more appropriate.

8) Ideally, it should be possible to serve the same or essentially the
same content to clients that have JavaScript disabled, and get
reasonable fallback.


Assessment of current options:

* <frameset>: Passes (1), (2), (3), (7), (8).  (5) is irrelevant.
Doesn't meet (4): existing content needs to be significantly rewritten
to use frames.  Fails (6) spectacularly, which rules it out as even a
viable option, let alone an acceptable solution.
* <iframe>s: As far as I can tell, same as <frameset> except it's
harder to use and doesn't support scriptless resizing.
* AJAX: Passes (2), (3), (7) (all depending on specific
implementation).  (5) is irrelevant.  Fails (1) pretty dramatically, I
think -- is it possible to write an AJAX page meeting (2), (3), (7)
from scratch in five minutes?  I'm not sure, since I admittedly don't
use JavaScript much.  Fails (4) -- pages need to be entirely rewritten
to use AJAX this heavily.  Fails (6) -- you need to do a whole bunch
of fancy footwork to get bookmarkable URLs, AFAICT.  Fails (8).

From the perspective of features, AJAX is clearly the best existing
option.  It mostly fails in the arenas of a) ease of use (for this
purpose), and b) support for JavaScript-less clients (although that's
not a blocker requirement IMO).  For all I know HTML5 already
addresses (a) by introducing new features that make writing good AJAX
apps easy, in which case I'm barking up the wrong tree, but I'll
proceed under the assumption that that's not the case.

Frames seem overall a better model for meeting this use case.  They
really only have one very serious problem, namely the URL problem.
(Breaking things up into separate documents is a pain too, but if
that were the only problem, they'd probably be good enough.)
Something more along the lines of frames seems like a good starting
point.

I'm drawn back to my original proposal.  The idea would be as
follows: instead of loading the new page in place of the old one,
just parse it, extract the bit you want, plug that into the existing
DOM, and throw away the rest.  More specifically, suppose we mark the
dynamic content instead of the static.

Let's say we add a new attribute to <a>, like <a onlyreplace="foo">,
where "foo" is the id of an element on the page -- or better, a
space-separated list of ids.  When the user clicks such a link, the
browser should do something like this: change the URL in the
navigation bar to the indicated URL, then retrieve the indicated
resource and begin to parse it.  Every time an element is encountered
that has an id in the onlyreplace list, if there is an element on the
current page with that id, remove the existing element and then add
the element from the new page.  I guess this should be done in the
usual fashion, inserting the element itself first and then its
children recursively as they're parsed.
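
To make this concrete, here's a minimal sketch (the onlyreplace
attribute is of course hypothetical, and the ids and URLs are made
up):

  <!-- Served at /inbox -->
  <div id="nav">
    ...site-wide navigation, search box, etc....
  </div>
  <div id="content">
    <a href="/message/42" onlyreplace="content">Read message 42</a>
    ...inbox listing...
  </div>

Following the link, the browser would fetch /message/42, parse it,
and swap in only the element with id "content" from the new document.
The nav div -- along with its event listeners and any text the user
has typed into it -- would be left untouched.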

This solution certainly satisfies (1), (2), (3), and (8).  It doesn't
satisfy (7), but that's not essential.  It's okay on (4) -- certainly
much better than AJAX: you just have to add the right attribute to
all the appropriate links and ensure that the appropriate ids are
present.

The sticking points are (5) and (6).  You point out that it would be
easy for authors to end up with the same URL *not* loading the same
content, depending on where the user arrives from (or whether their
browser supports the feature).  That's true.  It tempts authors not
to include the extra static content in non-primary pages at all,
which is bad.  I think this is fatal.

The problem is, I don't see how you could do anything else.  For
fallback in legacy user agents, you really need to have all the
content present somehow on the page.


So let's look at the idea of using <iframe static>.  It wouldn't fall
back well in UAs not supporting <iframe seamless>; but okay, let's say
that's acceptable.  The idea would be that the UA would actually
rerender the whole page, but somehow keep the <iframe>s fixed.  One
way to do this would be something like: retrieve the new page and
parse it, while keeping the old page active.  Then remove all the
content of the current page up to the first <iframe static> and
replace it with the corresponding content from the new page; repeat
for the content between the first and second <iframe static>, and so
on.  The problem is that you're changing around a lot of the page
structure, including possibly removing and then re-adding stylesheets
and such.  I don't know whether this would meet the goal of avoiding
flicker.
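
As a sketch, such a page might look like this (the static attribute
being the hypothetical part):

  <body>
    <iframe static seamless src="header.html"></iframe>
    <div id="content">...the part that changes per page...</div>
    <iframe static seamless src="footer.html"></iframe>
  </body>

On navigation, the UA would replace everything around the two static
<iframe>s while keeping the iframes themselves, and their state,
alive.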

(1), (2), (6), (7), and (8) all pass.  (3) is iffy; details would have
to be worked out, at least.  (4) fails.  (5) isn't met so well, but
maybe acceptably.  Overall, I think this idea is probably better than
my proposals, but I don't think it's superior enough to existing
methods to be worth the speccing and implementation effort.  I'm
beginning to think it's not possible to meet all the requirements I
gave any better than existing solutions, unfortunately.  (6) is
particularly a problem for all existing solutions.

