<div dir="ltr"><blockquote class="gmail_quote" style="margin-top: 0px; margin-right: 0px; margin-bottom: 0px; margin-left: 0.8ex; border-left-width: 1px; border-left-color: rgb(204, 204, 204); border-left-style: solid; padding-left: 1ex; ">
The only thing archives get you IMO is difficulty with caching algorithms, annoyances rewriting URLs, potentially blocked parsing, and possibly inefficient use of network bandwidth due to reduced parallelization.<br></blockquote>
<div> </div><div>I don't see any reason that parsing would need to be blocked any more than it already is. No rewriting of URLs would be necessary at all, and I have already provided suggestions for simple solutions that would prevent unnecessary blocking.</div>
<div><br></div><blockquote class="gmail_quote" style="margin-top: 0px; margin-right: 0px; margin-bottom: 0px; margin-left: 0.8ex; border-left-width: 1px; border-left-color: rgb(204, 204, 204); border-left-style: solid; padding-left: 1ex; ">
Server sharding and higher connection limits solve the problem of artificially low connection limits. JS script references block further parsing in most browsers; the correct solution to this, as Ian said, seems like some variant of Safari's optimistic parser. Referencing large numbers of tiny images causes excessive image header bytes + TCP connection overhead that can be reduced or eliminated with CSS spriting.</blockquote>
<div> </div>Server sharding and CSS sprites are both artificial workarounds for limitations of the existing deployment model. If you are worried about fragility, look no further than CSS sprites. They have to be background images and require precise measurement of size and location, which creates extremely tight coupling between the CSS code and the image file itself. That is before you even consider the maintenance of the sprite images themselves.<div>
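<br></div><div>To make that coupling concrete, here is a minimal sketch of a typical sprite rule (the file name, class name, and pixel offsets are hypothetical):</div><div><pre>&lt;style&gt;
  /* offsets are made up; they only work if they match the sprite image's layout exactly */
  .icon-save {
    display: inline-block;
    background: url(sprites.png) no-repeat -32px -64px;
    width: 16px;
    height: 16px;
  }
&lt;/style&gt;
&lt;span class="icon-save"&gt;&lt;/span&gt;</pre></div><div>Move or resize anything inside sprites.png and every rule like this has to be re-measured by hand.</div><div>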
<br></div><div>Clearly we are already dealing with the problem of resource loading and how to make it as efficient as possible. Our existing solutions are widely varied and complex, but all of them result in changes to our HTML/CSS/JS code that simply would not be there if these limitations did not exist.</div>
<div><br></div><div>It seems to me that many of the additions to the HTML spec are there because they provide a standard way to do something we are already doing with a hack or by more complicated means. CSS sprites are clearly a hack. Concatenating JS files is clearly a hack. Serving from multiple sub-domains to beat the connection limit is also a workaround. My proposal is intended to approach the deployment issue directly, because I think it is a limitation in the HTML spec itself, and therefore I think the HTML spec should provide its own solution. My proposal may not be the best way, but assuming the issue will eventually be dealt with by some other party through some other means does not seem right either.</div>
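<div><br></div><div>For the sake of illustration, the sub-domain workaround I am referring to typically looks something like this (the host names here are hypothetical):</div><div><pre>&lt;!-- static resources split across extra host names purely to defeat the per-host connection limit --&gt;
&lt;img src="http://img1.foo.com/header.png" alt=""&gt;
&lt;img src="http://img2.foo.com/logo.png" alt=""&gt;
&lt;script src="http://js1.foo.com/site.js"&gt;&lt;/script&gt;</pre></div><div>None of those extra host names exist for any reason other than working around the deployment model.</div>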
<div><br></div><div>-Russ</div><div><br></div><div><br><div class="gmail_quote">On Wed, Jul 30, 2008 at 4:27 AM, Peter Kasting <span dir="ltr"><<a href="mailto:pkasting@google.com">pkasting@google.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;"><div dir="ltr"><div class="gmail_quote"><div class="Ih2E3d">On Tue, Jul 29, 2008 at 5:10 PM, Russell Leggett <span dir="ltr"><<a href="mailto:russell.leggett@gmail.com" target="_blank">russell.leggett@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr"><div><blockquote class="gmail_quote" style="margin-top:0px;margin-right:0px;margin-bottom:0px;margin-left:0.8ex;border-left-width:1px;border-left-color:rgb(204, 204, 204);border-left-style:solid;padding-left:1ex">
That is a performance killer.</blockquote><div><br></div></div><div>I don't think it is as much of a performance killer as you say it is. Correct me if I'm wrong, but the standard connection limit is two.</div></div>
</blockquote><div><br></div></div><div>The standard connection limit is 6, not 2, as of IE 8 and Fx 3. I would be very surprised if this came back down or was not adopted by all other browser makers over the next year or two.</div>
<div><br></div><div>Furthermore, the connection limit applies only to resources off one host. Sites have for years gotten around this by sharding across hosts (<a href="http://img1.foo.com" target="_blank">img1.foo.com</a>, <a href="http://img2.foo.com" target="_blank">img2.foo.com</a>, ...).</div>
<div><br></div><div>There are many reasons resources can cause slowdown on the web, but I don't view this "archive" proposal as useful in solving them compared to existing tactics. Server sharding and higher connection limits solve the problem of artificially low connection limits. JS script references block further parsing in most browsers; the correct solution to this, as Ian said, seems like some variant of Safari's optimistic parser. Referencing large numbers of tiny images causes excessive image header bytes + TCP connection overhead that can be reduced or eliminated with CSS spriting.</div>
<div><br></div><div>The only thing archives get you IMO is difficulty with caching algorithms, annoyances rewriting URLs, potentially blocked parsing, and possibly inefficient use of network bandwidth due to reduced parallelization. Archives remove the flexibility of a network stack to optimize parallelization levels for the user's current connection type (not that I think today's browsers actually do such a thing, at least not well; but it is an area with potential gains).</div>
<div><br></div><font color="#888888"><div>PK</div></font></div></div>
</blockquote></div><br></div></div>