Justin -

Can you provide the content of the page which you used in your whitepaper?
(https://bug529208.bugzilla.mozilla.org/attachment.cgi?id=455820)

I have a few concerns about the benchmark:

 a) It looks like each page was loaded exactly once, per your notes? How hard
would it be to run the tests long enough to get a 95% confidence interval on
the load times? (There's a sketch of what I mean just after this list.)

 b) As you note in the report, slow start will kill you. I've verified this so
many times it makes me sick. If you try more combinations, I believe you'll
see this.

 c) The 1.3MB of subresources in a single bundle seems unrealistic to me. On
one hand you say that it's similar to CNN, but note that CNN has JS/CSS/images,
not just thumbnails like your test. Further, note that CNN pulls these
resources from multiple domains; combining them into one domain may work, but
it certainly makes the test content very different from CNN. So the claim that
it is somehow representative seems incorrect. For more accurate data on what
websites look like, see
http://code.google.com/speed/articles/web-metrics.html

 d) What did you do about subdomains in the test? I assume your test loaded
from one subdomain?

 e) There is more to a browser than page-load time. Time-to-first-paint is
critical as well. For instance, in WebKit and Chrome we have specific
heuristics which optimize for time-to-render instead of total page load. CNN
is always cited as a "bad page", but it's really not - it just has a lot of
content, both below and above the fold. When the user can interact with the
page successfully, the user is happy. In other words, I know I can make
WebKit's PLT much faster by removing a couple of throttles. But I also know
that doing so worsens the user experience by delaying the time to first paint.
So - is it possible to measure both times? (See the second sketch after this
list.) I'm betting time-to-paint goes through the roof with resource
bundles :-)
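
For (a), here's roughly what I have in mind - a toy TypeScript sketch with
invented numbers, not anything from your harness:

    // Toy sketch: given repeated load-time samples (ms) for one test
    // configuration, report the mean and a ~95% confidence interval.
    // The sample values below are made up for illustration.

    function mean(xs: number[]): number {
      return xs.reduce((a, b) => a + b, 0) / xs.length;
    }

    function confidenceInterval95(samples: number[]): [number, number] {
      const m = mean(samples);
      const variance =
        samples.reduce((acc, x) => acc + (x - m) ** 2, 0) / (samples.length - 1);
      const stdErr = Math.sqrt(variance / samples.length);
      // 1.96 is the normal-approximation critical value; good once you
      // have a few dozen runs per configuration.
      const halfWidth = 1.96 * stdErr;
      return [m - halfWidth, m + halfWidth];
    }

    // Hypothetical page-load times from repeated runs of one configuration.
    const loadTimesMs = [1420, 1385, 1510, 1395, 1460, 1440, 1475, 1405];
    const [lo, hi] = confidenceInterval95(loadTimesMs);
    console.log(
      `mean=${mean(loadTimesMs).toFixed(0)}ms, ` +
      `95% CI=[${lo.toFixed(0)}, ${hi.toFixed(0)}]ms`
    );

The point is just to keep adding runs until the interval is narrow enough to
actually separate the configurations being compared.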
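
For (e), a rough sketch of pulling both numbers out of the test page itself.
It assumes a browser that exposes the Navigation Timing and Paint Timing
entries, so treat it as illustrative only, not as how any existing harness
works:

    // Toy sketch: report both total page-load time and time-to-first-paint
    // from inside the test page.

    window.addEventListener('load', () => {
      // Give the load event a tick to finish so loadEventEnd is populated.
      setTimeout(() => {
        const t = performance.timing;
        const pageLoadMs = t.loadEventEnd - t.navigationStart;

        const paintEntries = performance.getEntriesByType('paint');
        const firstPaint = paintEntries.find((e) => e.name === 'first-paint');
        const firstPaintMs = firstPaint ? firstPaint.startTime : NaN;

        console.log(`page load: ${pageLoadMs} ms, first paint: ${firstPaintMs} ms`);
      }, 0);
    });

If bundling barely moves the first number but pushes the second way out, that
divergence is exactly what I'd want the benchmark to show.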

If you provide the content, I'll try to run some tests. It will take a few
days.

Mike

On Mon, Aug 9, 2010 at 9:52 AM, Justin Lebar <justin.lebar@gmail.com> wrote:
> On Mon, Aug 9, 2010 at 9:47 AM, Aryeh Gregor <Simetrical+w3c@gmail.com> wrote:
>> If UAs can assume that files with the same path
>> are the same regardless of whether they came from a resource package
>> or which, and they have all but a couple of the files cached, they
>> could request those directly instead of from the resource package,
>> even if a resource package is specified.
>
> These kinds of heuristics are far beyond the scope of resource
> packages as we're planning to implement them. Again, I think this
> type of behavior is the domain of a large change to the networking
> stack, such as SPDY, not a small hack like resource packages.
>
> -Justin
>
> On Mon, Aug 9, 2010 at 9:47 AM, Aryeh Gregor <Simetrical+w3c@gmail.com> wrote:
>> On Fri, Aug 6, 2010 at 7:40 PM, Justin Lebar <justin.lebar@gmail.com> wrote:
>>> I think this is a fair point. But I'd suggest we consider the following:
>>>
>>> * It might be confusing for resources from a resource package to show
>>> up on a page which doesn't "opt-in" to resource packages in general or
>>> to that specific resource package.
>>
>> Only if the resource package contains a different file from the real
>> one. I suggest we treat this as a pathological case and accept that
>> it will be broken and confusing -- or at least we consider how many
>> extra optimizations we could make if we did accept that, before
>> deciding whether the extra performance is worth the confusion.
>>
>>> * There's no easy way to opt out of this behavior. That is, if I
>>> explicitly *don't* want to load content cached from a resource
>>> package, I have to name that content differently.
>>
>> Why would you want that, if the files are the same anyway?
>>
>>> * The avatars-on-a-forum use case is less convincing the more I think
>>> about it. Certainly you'd want each page which displays many avatars
>>> to package up all the avatars into a single package. So you wouldn't
>>> benefit from the suggested caching changes on those pages.
>>
>> I don't see why not. If UAs can assume that files with the same path
>> are the same regardless of whether they came from a resource package
>> or which, and they have all but a couple of the files cached, they
>> could request those directly instead of from the resource package,
>> even if a resource package is specified. So if twenty different
>> people post on the page, and you've been browsing for a while and have
>> eighteen of their avatars (this will be common, a handful of people
>> tend to account for most posts in a given forum):
>>
>> 1) With no resource packages, you fetch two separate avatars (but on
>> earlier page views you suffered).
>>
>> 2) With resource packages as you suggest, you fetch a whole resource
>> package, 90% of which you don't need. In fact, you have to fetch a
>> resource package even if you have 100% of the avatars on the page! No
>> two pages will be likely to have the same resource package, so you
>> can't share cache at all.
>>
>> 3) With resource packages as I suggest, you fetch only two separate
>> avatars, *and* you got the benefits of resource packages on earlier
>> pages. The UA gets to guess whether using resource packages would be
>> a win on a case-by-case basis, so in particular, it should be able to
>> perform strictly better than either (1) or (2), given decent
>> heuristics. E.g., the heuristic "fetch the resource package if I need
>> at least two files, fetch the file if I only need one" will perform
>> better than either (1) or (2) in any reasonable circumstance.
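
A toy sketch of that heuristic, just to make the decision rule concrete; the
types and names here are hypothetical, not anything from the resource-packages
draft:

    // Toy model of "fetch the package if at least two needed files are
    // missing from cache, otherwise fetch the files directly".

    interface ResourcePackage {
      url: string;
      files: string[]; // paths the package claims to contain
    }

    type FetchPlan =
      | { kind: 'fetch-package'; url: string }
      | { kind: 'fetch-files'; paths: string[] };

    function planFetch(
      pkg: ResourcePackage,
      neededPaths: string[],
      cachedPaths: Set<string>
    ): FetchPlan {
      // Files the page needs, claimed by the package, and not already cached.
      const missing = neededPaths.filter(
        (p) => pkg.files.includes(p) && !cachedPaths.has(p)
      );

      if (missing.length >= 2) {
        // Two or more misses: one package fetch beats several round trips.
        return { kind: 'fetch-package', url: pkg.url };
      }
      // Zero or one miss: skip the package and fetch directly (possibly nothing).
      return { kind: 'fetch-files', paths: missing };
    }

    // Example: the page needs three packaged files and only one is cached,
    // so two are missing and the heuristic opts for the single package fetch.
    const plan = planFetch(
      { url: '/avatars.zip', files: ['a1.png', 'a2.png', 'a3.png'] },
      ['a1.png', 'a2.png', 'a3.png'],
      new Set(['a1.png'])
    );
    console.log(plan); // { kind: 'fetch-package', url: '/avatars.zip' }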
>>
>> I think this sort of situation will be fairly common. Has anyone
>> looked at a bunch of different types of web pages and done a breakdown
>> of how many assets they have, and how they're reused across pages? If
>> we're talking about assets that are used only on one page (image
>> search) or all pages (logos, shared scripts), your approach works
>> fine, but not if they're used on a random mix of pages. I think a lot
>> of files will wind up being used on only particular subsets of pages.
>>
>>> In general, I think we need something like SPDY to really address the
>>> problem of duplicated downloads. I don't think resource packages can
>>> fix it with any caching policy.
>>
>> Certainly there are limits to what resource packages can do, but we
>> can wind up closer to the limits or farther from them depending on the
>> implementation details.