What is the recommended minimum object size for gzip performance benefits?

Solution 1

You're talking about the benefits to your bandwidth costs, but then also comparing the performance of the page load in a browser. They're two different things.

Any time you gzip a response, something has to actually do the compression (in your case, the F5) and the client (or, technically, intermediate proxies) has to handle the decompression. This can add latency to the request, depending on how capable the hardware is on both ends.

The "minimum size for gzip" reflects the fact that, below a certain size, the time spent compressing and decompressing buys you nothing from the browser's perspective. If you are purely after bandwidth savings, go ahead and set your minimum as low as you'd like, but do so knowing that you may not be giving your end users any performance gain.
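Both halves of that tradeoff are easy to measure. A minimal sketch using Python's standard gzip module (the payloads are arbitrary examples, not measurements from any particular site):

```python
import gzip
import time

def gzip_stats(payload: bytes, reps: int = 200) -> tuple[int, float]:
    """Return (compressed size in bytes, mean compress+decompress time in ms)."""
    start = time.perf_counter()
    for _ in range(reps):
        gz = gzip.compress(payload)
        gzip.decompress(gz)
    elapsed_ms = (time.perf_counter() - start) / reps * 1000.0
    return len(gzip.compress(payload)), elapsed_ms

tiny = b'{"status": "ok", "id": 42}'   # 26 B AJAX-style body: gzip makes it BIGGER
big = b"<li>item</li>" * 500           # ~6.5 KB of repetitive markup: gzip shrinks it

for name, payload in (("tiny", tiny), ("big", big)):
    size, ms = gzip_stats(payload)
    print(f"{name}: {len(payload)} B -> {size} B gzipped, {ms:.3f} ms per roundtrip")
```

On the tiny body the gzip header and trailer alone add roughly 18 bytes, so the output grows; only the larger payload repays the CPU time spent on compression.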

Solution 2

Apache Tomcat's gzip filter starts compressing at 2kb by default. A quick test suggests that is about the lowest useful boundary, and you can raise it to at least 3-4kb: at around 2kb you get a similar size at the output, plus the time penalty for zipping.
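As a concrete example of where that threshold lives, Tomcat exposes it on the HTTP connector in server.xml. A sketch of the relevant attributes (the port and MIME-type list are placeholders; note that older Tomcat versions spell the last attribute compressableMimeType):

```xml
<!-- server.xml: enable gzip on the HTTP connector; compressionMinSize is in bytes,
     and 2048 is Tomcat's documented default -->
<Connector port="8080" protocol="HTTP/1.1"
           compression="on"
           compressionMinSize="2048"
           compressibleMimeType="text/html,text/css,application/json,application/javascript" />
```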


utt73 (John Utter)

Updated on September 18, 2022

Comments

  • utt73 over 1 year

    I'm working on improving page speed display times, and one of the methods is to gzip content from the webserver.

    Google recommends:

    Note that gzipping is only beneficial for larger resources. Due to the overhead and latency of compression and decompression, you should only gzip files above a certain size threshold; we recommend a minimum range between 150 and 1000 bytes. Gzipping files below 150 bytes can actually make them larger.

    We serve our content through Akamai, using their network for a proxy and CDN. What they've told me:

    Following up on your question regarding what is the minimum size Akamai will compress the requested object when sending it to the end user: The minimum size is 860 bytes.

    My reply:

    What is the reason (or reasons) that Akamai's minimum size is 860 bytes? And why, for example, is this not the case for files Akamai serves for Facebook? (see below) Google recommends gzipping more aggressively, and that seems appropriate on our site, where the most frequent hits, by far, are AJAX calls that are <860 bytes.

    (Screenshot of Facebook response headers disputing Akamai's statement.)

    Akamai's response:

    The reason 860 bytes is the minimum size for compression is twofold: (1) The overhead of compressing an object under 860 bytes outweighs the performance gain. (2) Objects under 860 bytes can be transmitted via a single packet anyway, so there isn't a compelling reason to compress them.

    So I'm here for some fact checking. Does the single-packet argument settle the question at 860 bytes? Why would high-traffic sites push the threshold down to 150 bytes: just to save on bandwidth costs (since CDNs base their charges on bandwidth offloaded from origin), or is there a performance gain in doing so?


    7/9/12 Update: I asked Steve Souders whether there is a performance gain in gzipping responses that are already smaller than a packet, and what the recommended minimum object size for gzip performance benefits is. This is his response:

    Thanks for your email. The size is somewhere between 1-5K. Apache has a default but I forget what it is - that would be a good guide.

    We do our compression on an F5 appliance, so we are going to lower it to ~350 bytes, since there is a decent number of AJAX calls between that and 1K. The AJAX calls that are less than 350 bytes on our website are all down around 70 bytes, less than Google's recommendation, so it really seems to fall back to: know your website and adjust based on your code.

    I'll get back to this post after the F5 update runs in Production for a while. I think there will be little performance benefit, but we'll lower our Akamai costs a bit, since they will be serving fewer bytes.

    • Admin
      Admin over 5 years
      @Steve, regarding the April edit: I added "welp" to convey my reaction, since the reply from the expert in this field did not answer either question. I was delighted to hear back from Mr. Souders, but he did not know a definitive answer either.
    • Admin
      Admin over 5 years
      As for "get back to this post after the F5 update runs in Production for a while": though I do not work with this particular web application anymore, we were successful in our goal of getting both the average page load() event time and Time to Interactive (TTI) below 2 seconds, and this reduction in payload was one small part of that. Reducing the number of HTTP requests, expanding browser caching, optimizing code, and other web performance best practices all contributed.
    • Admin
      Admin over 2 years
      Clearly we just need middle-out
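Akamai's single-packet argument from the thread above can be sanity-checked with simple arithmetic. A minimal sketch, assuming a typical TCP MSS of 1460 bytes (a 1500-byte Ethernet MTU minus 40 bytes of IP/TCP headers; real values vary with path MTU and TCP options):

```python
import math

MSS = 1460  # assumed bytes of TCP payload per segment; varies with MTU and options

def segments(body_size: int) -> int:
    """Segments needed to carry a response body (ignoring headers and slow start)."""
    return max(1, math.ceil(body_size / MSS))

# Anything under 860 B fits in one segment whether or not it is compressed,
# so gzip saves bandwidth there but not round trips.
print(segments(860), segments(350))    # 1 1
print(segments(3000), segments(1200))  # 3 1: here compression can save packets
```

This supports Akamai's second point: below one MSS, compression changes the bytes on the wire but not the packet count, so any gain is bandwidth cost rather than load time.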