[whatwg] <a onlyreplace>
ian at hixie.ch
Sun Oct 18 02:30:21 PDT 2009
On Fri, 16 Oct 2009, Aryeh Gregor wrote:
> I'm drawn back to my original proposal. The idea would be as follows:
> instead of loading the new page in place of the old one, just parse it,
> extract the bit you want, plug that into the existing DOM, and throw
> away the rest. More specifically, suppose we mark the dynamic content
> instead of the static.
> Let's say we add a new attribute to <a>, like <a onlyreplace="foo">,
> where "foo" is the id of an element on the page. Or better, a
> space-separated list of ids. When the user clicks such a link, the
> browser should do something like this: change the URL in the navigation
> bar to the indicated URL, and retrieve the indicated resource and begin
> to parse it. Every time an element is encountered that has an id in the
> onlyreplace list, if there is an element on the current page with that
> id, remove the existing element and then add the element from the new
> page. I guess this should be done in the usual fashion, first appending
> the element itself and then its children recursively, leaf-first.
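For concreteness, a minimal sketch of what that markup might look like
(filenames and ids invented for illustration):

  <!-- page1.html: an ordinary, complete page -->
  <div id="nav">...site navigation...</div>
  <div id="content">Page 1 content</div>
  <a href="page2.html" onlyreplace="content">Next page</a>

  <!-- page2.html: also a complete page; only #content differs -->
  <div id="nav">...site navigation...</div>
  <div id="content">Page 2 content</div>

In a legacy UA the link is an ordinary navigation; in a supporting UA
only the #content element would be swapped in from page2.html.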
On Fri, 16 Oct 2009, Tab Atkins Jr. wrote:
> Single-page apps are already becoming common for js-heavy sites. The
> obvious example is something like Gmail, but it's becoming more common
> everywhere. The main benefit of doing this is that you never dump the
> script context, so you only have to parse/execute/apply scripting *once*
> across the page, making really heavy libraries actually usable.
> In fact, writing a single-page app was explicitly given as a suggestion
> in the "Global Script" thread. Even in contexts with lighter scripts,
> there can still be substantial run-time rewriting of the page which a
> single-page app can avoid doing multiple times (frex, transforming a
> nested list into a tree control).
> The problem, though, is that writing a single-page app is currently
> a relatively large and clunky undertaking, even with libraries like jQuery which make
> the process much simpler. It requires you to architect your site around
> the design, either producing a bunch of single-widget files that you
> query for and slap into place, or some relatively complex client-side
> logic to parse data structures into HTML. It's also very hard to get
> accessibility and graceful degradation right, requiring you to basically
> completely duplicate everything in a static form. Finally, preserving
> bookmarkability/general deeplinking (such as from a search engine)
> requires significant effort with history management and url hacking.
> Aryeh's suggestion, though, solves *all* of these problems with a single
> trivial attribute. You first design a static multi-page site like
> normal, with the only change being this attribute on your navigation
> links specifying the dynamic/replaceable portions of the page. In a
> legacy client, then, you have a perfectly serviceable multipage site,
> with the only problems being the reloading of js and such on each pageload.
> In a supporting client, though, clicking a link causes the browser to
> perform an ordinary request for the target page (requiring *no* special
> treatment from the author), parse/treebuild the new page, and then yank
> out the relevant fragments and replace bits in the current page with
> them. The url/history automatically updates properly; bookmarking the
> page and visiting it later will take you to the appropriate static page that
> already exists. Script context is maintained, listeners stay around,
> overall page state remains stable across 'pageloads'.
> It's a declarative, accessible, automatic, and EASY way of creating the
> commonest form of single-page apps.
> This brings benefits to more than just the traditional js-heavy apps. My
> company's web site utilizes jQuery for a lot of small upgrades in the
> page template (like a hover-expand accordion for the main nav), and for
> certain things on specific pages. I know that loading the library, and
> applying the template-affecting code, slows down my page loads, but it's
> not significant enough to be worth the enormous effort to create an
> accessible, search-engine friendly single-page app. This would solve my
> problem trivially, though, providing a better overall UI to my visitors
> (snappier page loads) without any real effort on my part, and without
> harming accessibility or SEO.
> This also trivially replaces most/all uses of bad mechanisms like
> <frameset> used to address similar problems (such as maintaining state
> on a complex nav).
> The only addition I'd make to this is to allow a tag in the <head> that
> specifies default replaceability for all same-origin links. Perhaps just
> an attribute on <base>? It would accept a space-separated list of ids,
> just like the @onlyreplace attribute on <a>s. An @onlyreplace attribute
> on a link would completely override this default (this would allow me
> to, frex, have the mainnav only replace the subnav, while all other
> links replace the content instead).
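Sketching that (treating the attribute on <base> as purely
hypothetical syntax):

  <head>
    <base onlyreplace="content subnav">
  </head>
  ...
  <!-- overrides the document-wide default: swaps only the subnav -->
  <a href="/products/" onlyreplace="subnav">Products</a>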
> I believe that the single-page app pattern solves a lot of problems that
> exist today and will persist into the future, and that it's worthy of
> being made into a declarative, html-supported mechanism. I also believe
> that Aryeh's suggestion is the correct way to go about this.
> The only problem I can see with this is that it's possible for authors
> to believe that they only need to actually write a single full page, and
> can just link to fragments containing only the chunk of content to be
> replaced. This would mostly break bookmarking and deeplinking, as
> visitors would just receive a chunk of unstyled content separated from
> the overall page template. However, because it breaks so *visibly* and
> reliably (unlike, say, framesets, which just break bookmarking by
> sending you to the 'main page'), I think there would be sufficient
> pressure for authors to get this right, especially since it's so *easy*
> to get it right.
On Fri, 16 Oct 2009, Tab Atkins Jr. wrote:
> 1. <iframe>s, like frames before them, break bookmarking. If a user
> bookmarks the page and returns to it later, or gets deeplinked via a
> search engine or a link from a friend, the <iframe> won't show the
> correct content. The only way around this is some fairly non-trivial
> scripting that rewrites the page's url as the user navigates the
> iframe, and parses a deeplink url into an appropriate url
> for the iframe on initial pageload. @onlyreplace, on the other hand,
> automatically works perfectly with bookmarking. The UA still changes
> urls and inserts history appropriately as you navigate, and on a fresh
> pageload it just requests the ordinary static page showing the
> appropriate content.
> 2. <a target> can only navigate one iframe at a time. Many/most sites,
> though, have multiple dynamic sections scattered throughout the page.
> The main site for my company, frex, has 3 (content, breadcrumbs, and
> section nav) which *cannot* be combined to display as a single <iframe>,
> at least not without including a whole bunch of static content as
> well. Updating the rest on a single click would require scripting the
> additional iframes. @onlyreplace, on the other hand, handles this
> seamlessly - just include multiple ids in the attribute value.
> 3. <iframe>s require you to architect your site around them. Rather
> than a series of independent pages, you must create a single master page
> and then a number of content-chunk mini-pages. This breaks normal
> authoring practices (though in some ways it's easier), and requires you
> to work hard to maintain accessibility and such in the face of these
> atrophied mini-pages. @onlyreplace works on full, ordinary pages.
> It's *possible* to link to a content-chunk mini-page instead, but this
> will spectacularly break if you ever deeplink straight to one of the
> pages, so it should become automatic for authors to do this correctly.
> 4. <iframe>s have dubious accessibility and search effects. I don't
> know if bots can navigate <a target> links appropriately. I also
> believe that this causes problems with screen-readers. While either of
> these sets of UAs can be rewritten to handle <iframe>s better (and
> handle @onlyreplace replacement as well), with @onlyreplace they *also*
> have the option of just completely ignoring the attribute and navigating
> the site as an ordinary multi-page app. Legacy UAs will automatically
> do so, providing perfect backwards compatibility.
> 1. One of the big beneficiaries of @onlyreplace will be fairly ordinary
> sites that are currently using an ordinary multi-page architecture.
> All they have to do is add a single tag to the <head> of their pages,
> and they automatically get the no-flicker refresh of a single-page app.
> These sites are *already* grabbing the whole page on each request, so
> @onlyreplace won't make them take any *additional* bandwidth. It will
> merely make the user experience smoother by reducing flicker and keeping
> js-heavy elements of the page template alive.
> 2. Even though site templates are usually weightier than the dynamic
> portions of a site, it's still not a very significant wastage. For
> comparison, my company's main site is roughly 16kb of template, and
> somewhere around 2-3k of dynamic page content. (Aryeh - I gave you
> slightly different numbers in chat because I was counting wrong.) So
> that's a good 85% of each request being thrown away as irrelevant.
> However, it's also *only 16kb*, and that's UNCOMPRESSED - after standard
> gzip compression the template is worth maybe 5kb. So I waste 5kb of
> bandwidth per request. Big deal. (According to Philip`, my company's
> site's weight is just on the low side of average.)
> 3. Because this is a declarative mechanism (specifying WHAT you want,
> not HOW to get it), it has great potential for transparent optimizations
> behind the scenes. For example, the browser could tell the server which
> bits it's interested in replacing, and the server could automatically
> strip full pages down to only those chunks. This would eliminate
> virtually all bandwidth waste, while still being completely transparent
> to the author - they just create ordinary full static pages. Heck, you
> could even handle this yourself with JS and a bit of server-side coding,
> intercepting clicks and rewriting the urls to pass the @onlyreplace data
> in a query parameter, and have a server-side script determine what to
> return based on that. Less automatic, but fairly simple, and still
> easier than using JS to do this in the normal AJAX manner. (And UAs
> that don't support such an optimization would still work properly,
> just with a bit more bandwidth waste.)
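The client half of that hand-rolled version might look something like
this sketch (the "fragments" parameter name and the server behaviour
behind it are invented for illustration):

  // intercept clicks on @onlyreplace links and rewrite the request
  // to carry the fragment list in a query parameter
  document.addEventListener('click', function (event) {
    var a = event.target;
    while (a && !(a.tagName === 'A' && a.hasAttribute('onlyreplace')))
      a = a.parentElement;
    if (!a) return;
    event.preventDefault();

    var ids = a.getAttribute('onlyreplace');
    var url = a.href + (a.href.indexOf('?') < 0 ? '?' : '&') +
              'fragments=' + encodeURIComponent(ids);
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url, true);
    xhr.onload = function () {
      // the server returns only the requested chunks, e.g.
      // <div id="content">...</div><div id="subnav">...</div>
      var doc = new DOMParser()
        .parseFromString(xhr.responseText, 'text/html');
      ids.split(/\s+/).forEach(function (id) {
        var fresh = doc.getElementById(id);
        var stale = document.getElementById(id);
        if (fresh && stale)
          stale.parentNode.replaceChild(
            document.importNode(fresh, true), stale);
      });
      history.pushState(null, '', a.href);  // keep the url bookmarkable
    };
    xhr.send();
  }, false);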
> Semantically the replace operation should be identical to grabbing the
> appropriate chunk of text from the new page and setting it as the
> outerHTML of the appropriate element. Any <script>s that are located
> within this chunk would run in the exact same manner. Scripts elsewhere
> in the new page would not be run.
> document.write()s contained in <script> blocks inside the target
> fragment will run when they get inserted into the page, but
> document.write()s outside of that won't. Producing the target fragment
> with document.write() is a no-go from the start. Don't do that anyway;
> it's a bad idea.
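i.e., per replaced id, something like this sketch (with newDoc being
the parsed-but-not-displayed target page):

  // swap one fragment; semantically like an outerHTML assignment
  function replaceFragment(newDoc, id) {
    var fresh = newDoc.getElementById(id);
    var old = document.getElementById(id);
    // nb: a literal outerHTML assignment does not execute inserted
    // <script>s, so a shim would have to recreate them; the proposal
    // asks the UA to run them natively, as described above
    if (fresh && old)
      old.outerHTML = fresh.outerHTML;
  }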
On Fri, 16 Oct 2009, Jonas Sicking wrote:
> We actually have a similar technology in XUL called "overlays",
> though we use that for a wholly different purpose.
> Anyhow, this is certainly an interesting suggestion. You can actually
> mostly implement it using the primitives in HTML5 already. By using
> pushState and XMLHttpRequest you can download the page and change the
> current page's URI, and then use the DOM to replace the needed parts.
> The only thing that you can't do is "stream" in the new content since
> mutations aren't dispatched during parsing.
> For some reason I'm still a bit uneasy about this feature. It feels a
> bit fragile for some reason. One thing I can think of is what happens if
> the load stalls or fails halfway through the load. Then you could end up
> with a page that contains half of the old page and half the new. Also,
> what should happen if the user presses the 'back' button? Don't know how
> big of a problem these issues are, and they are quite possibly fixable.
> I'm definitely curious to hear what developers that would actually use
> this think of the idea.
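Such a library can indeed be quite small. A sketch (names invented;
note that because the whole response is parsed before the DOM is
touched, the half-old/half-new failure mode reduces to "fall back to
a normal load"):

  // fetch the whole target page, swap the listed ids, update history
  function fetchAndSwap(url, ids, push) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url, true);
    xhr.onload = function () {
      if (xhr.status !== 200) { location.href = url; return; }
      var doc = new DOMParser()
        .parseFromString(xhr.responseText, 'text/html');
      ids.forEach(function (id) {
        var fresh = doc.getElementById(id);
        var stale = document.getElementById(id);
        if (fresh && stale)
          stale.parentNode.replaceChild(
            document.importNode(fresh, true), stale);
      });
      document.title = doc.title;
      if (push) history.pushState({ ids: ids }, '', url);
    };
    xhr.onerror = function () { location.href = url; };  // fall back hard
    xhr.send();
  }

  // 'back'/'forward' re-run the swap instead of showing stale content
  window.addEventListener('popstate', function (e) {
    if (e.state && e.state.ids)
      fetchAndSwap(location.href, e.state.ids, false);
  });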
See also Daniel's HTMLoverlays:
On Sat, 17 Oct 2009, Dion Almaer wrote:
> This feels like really nice sugar, but maybe the first step should be to
> get the shim out that gets it working using JS now.... and then see how
> it works in practice. I totally understand why this looks exciting, but
> I have the same uneasiness as Jonas. It feels like a LOT of magic to go
> grab a page and grab out the id and ..... and I am sure there are edges.
> Cool idea for sure! It also feels like this should work nicely with the
> history/state work that already exists.
On Sat, 17 Oct 2009, Jonas Sicking wrote:
> Yeah, I think this puts the finger on my uneasiness nicely. There's
> simply a lot of stuff going on with very little control for the author.
> I'd love to see a JS library developed on top of pushState/
> XMLHttpRequest that implements this functionality, and then see that JS
> library deployed on websites, and see what the experiences from that are.
> If it turns out that this works well then that would be a strong case
> for adding this to browsers natively.
> In fact, you don't even need to use pushState. For now this can be faked
> using onhashchange and fragment identifier tricks. It's certainly not as
> elegant as pushState (that is, after all, why pushState was added), but
> it's something that can be tried today.
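The hash-based variant of the same shim might look like this (again
a sketch, reusing fetchAndSwap() from above; the real target url is
carried after the "#"):

  // navigate by encoding the target after the '#', e.g.
  //   http://example.com/#/page2.html
  function navigateViaHash(url) {
    location.hash = '#' + url;  // fires 'hashchange'
  }

  window.addEventListener('hashchange', function () {
    var target = location.hash.slice(1);
    if (target)
      fetchAndSwap(target, ['content'], false);  // ids hard-coded here
  }, false);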
On Sat, 17 Oct 2009, Nelson Menezes wrote:
> Well, here's a badly-hacked-together solution that emulates this behaviour.
> I think it'll be helpful even if it only gets used in a JS library as
> you mention (change the attribute to a classname then). Still, it can be
> made to work with today's browsers:
On Sat, 17 Oct 2009, Schuyler Duveen wrote:
> We used this technique several years ago. One place you can see it publicly is
> the swapFromHttp() function at:
> You can see it in action on some pages like:
> where it adds in the page on the left from this file
> One of the big issues we found using it on some other sites is that
> pointers in the system became stale. Thus, only half the problem was solved.
> Also, the problem (as I implemented it) is that XMLHttpRequest's responseXML has
> been very finicky in past (and current) browsers. My comments in the
> code reflect some of the things you need to make sure you're doing to
> make it work across browsers (at least if you want a DOM- rather than
> regex-based solution):
> * IE 6 needed the Content-type: text/xml
> * Firefox (?2.x) wants xmlns="http://www.w3.org/1999/xhtml" in html tag
> * IE and Safari don't handle named entities like &nbsp; well in this
> context, so they should be numeric instead (e.g. &#160;)
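For what it's worth, the Gecko-side escape hatch for the first of
those hoops is overrideMimeType(); a sketch (url invented for
illustration):

  var xhr = new XMLHttpRequest();
  xhr.open('GET', 'page2.xhtml', true);
  // force XML parsing when the server won't send Content-Type:
  // text/xml; IE 6 has no equivalent, hence the server-side rule above
  if (xhr.overrideMimeType) xhr.overrideMimeType('text/xml');
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.responseXML) {
      // walk xhr.responseXML for the fragment to swap in
    }
  };
  xhr.send(null);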
> Vendors might better serve us by reducing these hoops to jump through,
> so that this technique would be easier to implement.
> This method did make it much easier to leverage server template code.
> But since it largely simplifies server template code, why not stick
> with server-side solutions like Ian Bicking's:
> It's still a bit weird that this proposal, instead of allowing every
> element to be a link (like XHTML2), would allow every element to be
> something like an IFRAME (all while a thread remembering how evil
> framesets are continues).
My recommendation would be to follow the process for adding features:
In particular the bit about experimental implementations. I think this
idea looks very interesting, but it's hard to evaluate without concrete
experience with a browser implementing this (or, as Jonas suggests, a
library that hacks it in).
It seems like the kind of thing that we could adopt early on in the next
feature cycle, if it turns out to be a good solid model.
Ian Hickson U+1047E )\._.,--....,'``. fL
http://ln.hixie.ch/ U+263A /, _.. \ _\ ;`._ ,.
Things that are impossible just take longer. `._.-(,_..'--(,_..'`-.;.'