Generic Bundling

Jorge Chamorro jorge at jorgechamorro.com
Fri Oct 25 03:24:44 PDT 2013


On 24/10/2013, at 17:06, François REMY wrote:

> HTTP 2.0 can send you multiple files in parallel on the same connection: that way you don't pay (1) the TCP's Slow Start cost, (2) the HTTPS handshake and (3) the cookie/useragent/... headers cost.

Doesn't Connection: keep-alive deal with (1) and (2) nicely?
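
Something like this is all I mean (a sketch, assuming Node's https module and a made-up example.com host): one keep-alive agent capped at one socket, so TCP slow start and the TLS handshake are paid once and reused by the later requests.

    var https = require('https');

    // One agent, one socket: the TCP slow-start ramp-up and the TLS
    // handshake happen on the first request only; the rest reuse the
    // already-warm connection.
    var agent = new https.Agent({ keepAlive: true, maxSockets: 1 });

    ['/a.js', '/b.js', '/c.css'].forEach(function (path) {
      https.get({ host: 'example.com', path: path, agent: agent }, function (res) {
        res.resume(); // drain it so the socket goes back into the pool
      });
    });

(It does nothing for (3), the repeated headers, granted.)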

> Under HTTP 2.0, you can also ask for the next files while you receive the current one (or send them by batch), and that reduces the RTT cost to 0.

http2.0 doesn't (and can't) fix the 1 RTT per request cost: in that respect it's just like http1.1.

If http2.0 lets me ask for n files in a single request then yes, the RTT cost would be ≈ 1 in total, or 1/n per request if you will, which is just like asking for a .zip over http1.1.
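
Back-of-envelope version of what I mean (the numbers are invented, only the ratios matter):

    // Latency cost only, ignoring bandwidth and parallelism.
    var RTT   = 100;  // ms, assumed round trip
    var files = 20;   // assumed number of small resources

    var oneRequestEach = files * RTT;  // 20 serialized requests: 2000 ms
    var oneZipBundle   = 1 * RTT;      // one request for the .zip: 100 ms
    var perFile        = RTT / files;  // "1/n per request":          5 ms

    console.log(oneRequestEach, oneZipBundle, perFile);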

> Also, the server can decide to send you a list of files you didn't request (à la ZIP), making it totally unnecessary for your site to ask for the files to preload them.

Can a server always know what the page is going to need next... beforehand? Easily?
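
To be concrete about what that requires on the server, a sketch (assuming an API along the lines of Node's http2 module; the manifest and file names are made up): the "knowledge" is just a hand-maintained map from page to assets.

    var http2 = require('http2');
    var fs = require('fs');

    // Somebody has to write this map by hand (or generate it at build
    // time) and keep it in sync with the pages: that's where the
    // "knowing beforehand" lives.
    var pushManifest = {
      '/index.html': ['/app.js', '/app.css']
    };

    var server = http2.createSecureServer({
      key: fs.readFileSync('key.pem'),
      cert: fs.readFileSync('cert.pem')
    });

    server.on('stream', function (stream, headers) {
      var path = headers[':path'];
      (pushManifest[path] || []).forEach(function (asset) {
        stream.pushStream({ ':path': asset }, function (err, pushStream) {
          if (err) return;
          pushStream.respondWithFile('.' + asset);
        });
      });
      stream.respondWithFile('.' + path);
    });

    server.listen(8443);

Keeping that map in sync with what the pages actually use is the hard part.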

> The priority of downloads is negotiated between the browser and the server, and not dependent on the 6 connections and the client.

Yes, that sounds great!

> The big advantage of the HTTP2 solution over the ZIP is that your site could already load with only the most important files downloaded while if you use a ZIP you've to wait until all files have been downloaded.

1.- Bundle *wisely*
2.- n gzipped files multiplexed over a single http2.0 connection don't necessarily arrive faster than the same files .zipped and sent over a non-multiplexed http1.1 connection: multiplexing has an overhead (at both ends) that http1.1 doesn't.
3.- Yes, you can't unzip a .zip as it arrives (well, you can, but you shouldn't until you've got the index, which comes last; see the sketch below), but knowing for sure that all of its files are cached (after unzipping) is a plus, imo.
4.- It's not http2.0 *or* .zip bundling. We could have both. Why not?
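
On 3, for the record, here's why the index matters (a minimal locator, assuming the whole .zip buffer is already in memory):

    // The End Of Central Directory record (signature 0x06054b50) sits at
    // the *end* of a .zip, and only it tells you where the authoritative
    // file index (the central directory) starts. Until it has arrived you
    // can't trust anything you've already inflated.
    function findEOCD(buf) {
      for (var i = buf.length - 22; i >= 0; i--) { // EOCD is >= 22 bytes
        if (buf.readUInt32LE(i) === 0x06054b50) {
          return {
            entryCount: buf.readUInt16LE(i + 10), // total entries
            cdOffset:   buf.readUInt32LE(i + 16)  // where the index starts
          };
        }
      }
      return null; // the .zip isn't complete (yet)
    }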

> From a performance point of view, this is an issue. Also, since you can only start analyzing the resources at that time, you will overload the CPU at that time. If you can unzip the files one by one, you can spread the load over a much longer time.

Overload the CPU? :-P
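
And even if that were a real concern, nothing stops you from spreading the work out after the .zip has arrived; a sketch (entries is assumed to come from some zip reader, as { name, compressedData } pairs):

    var zlib = require('zlib');

    // Inflate one entry per event-loop turn instead of all of them in one
    // go, so nothing is blocked while the bundle is being unpacked.
    function unzipGradually(entries, onFile) {
      var i = 0;
      (function next() {
        if (i >= entries.length) return;
        var entry = entries[i++];
        onFile(entry.name, zlib.inflateRawSync(entry.compressedData)); // zip entries use raw deflate
        setImmediate(next); // yield between files
      })();
    }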

> ± In the equation you paint above something important is missing: the fact that
> ± there's a round-trip delay per request (even with http2.0), and that the only
> ± way to avoid it is to bundle things, as in .zip bundling, to minimize the
> ± (number of requests and thus the) impact of latencies.
> 
> Go find some HTTP2 presentation, you'll learn things ;-)

Look, I've done it, I ♥ it, it's awesome, and I keep thinking that .zip bundling would be a nice thing to have too.

-- 
( Jorge )(); 

