Content-Length header versus chunked encoding

Solution 1

Use Content-Length, definitely. The extra server load will be almost nonexistent, and the benefit to your users will be large.

For dynamic content, it's also quite simple to add compressed response support (gzip). That requires output buffering, which in turn gives you the content length. (This isn't practical for file downloads or already-compressed content such as audio and images.)
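That buffering approach can be sketched with the plain JDK (no framework assumed; the class and method names here are illustrative): compress the whole body first, and the compressed byte count is what goes into Content-Length.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class GzipBuffering {

    // Buffer the whole response body and compress it; the length of the
    // compressed bytes is what belongs in the Content-Length header.
    static byte[] gzip(byte[] body) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(buffer)) {
            gz.write(body);
        }
        return buffer.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] html = "<html><body>hello</body></html>"
                .getBytes(StandardCharsets.UTF_8);
        byte[] compressed = gzip(html);
        // Content-Length must describe the bytes actually sent on the wire:
        System.out.println("Content-Encoding: gzip");
        System.out.println("Content-Length: " + compressed.length);
    }
}
```

Note the trade-off this illustrates: the whole body sits in memory before the first byte is sent, which is exactly why this works for dynamic pages but not for large files or streams.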

Consider also adding support for partial content/byte-range serving - that is, capability to restart downloads. See here for a byte-range example (the example is in PHP, but is applicable in any language). You need Content-Length when serving partial content.

Of course, those are not silver bullets: for streaming media, output buffering and a known response size are pointless; for large files, output buffering doesn't make sense, but Content-Length and byte serving make a lot of sense (restarting a failed download becomes possible).

Personally, I serve Content-Length whenever I know it; for file downloads, checking the file size costs next to nothing in terms of resources. Result: the user gets a determinate progress bar (and dynamic pages download faster thanks to gzip).

Solution 2

If the content length is known beforehand, then I would certainly prefer it over sending in chunks. If the content comes from static files on the local disk file system or from a database, then any self-respecting programming language and RDBMS provides a way to get the content length beforehand. You should make use of it.

On the other hand, if the content length really is unpredictable beforehand (e.g. when your intent is to zip several files together and send them as one), then sending it in chunks may be faster than buffering it in the server's memory or writing it to the local disk first. But this does impact the user experience negatively, because the download progress is unknown. The impatient may then abort the download and move on.
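As a minimal sketch of that unpredictable-length case, using the JDK's ZipOutputStream (the zipTo helper and the file map are hypothetical): the archive is written straight to the response stream, so the total size is never known up front and chunked encoding is the natural fit.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.util.Map;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class StreamedZip {

    // Write a zip archive straight to the (chunked) response body stream.
    // The total archive size is only known once the last entry is written,
    // so no Content-Length can be set in advance.
    static void zipTo(OutputStream responseBody, Map<String, byte[]> files)
            throws IOException {
        try (ZipOutputStream zip = new ZipOutputStream(responseBody)) {
            for (Map.Entry<String, byte[]> e : files.entrySet()) {
                zip.putNextEntry(new ZipEntry(e.getKey()));
                zip.write(e.getValue());
                zip.closeEntry();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for the HTTP response stream:
        java.io.ByteArrayOutputStream sink = new java.io.ByteArrayOutputStream();
        zipTo(sink, Map.of("a.txt", "first".getBytes(),
                           "b.txt", "second".getBytes()));
        System.out.println("zip bytes written: " + sink.size());
    }
}
```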

Another benefit of knowing the content length beforehand is the ability to resume downloads. I see in your post history that your main programming language is Java; here is an article with more technical background information and a Java Servlet example that does exactly that.

Solution 3

Content-Length

The Content-Length header states the byte length of the request/response body. If you neglect to specify it, HTTP/1.1 servers will implicitly use a Transfer-Encoding: chunked header instead. The Content-Length and Transfer-Encoding headers should not be used together. With chunked encoding alone, the receiver has no idea what the total length of the body is and cannot estimate the download completion time. If you do add a Content-Length header, make sure it matches the size of the entire body in bytes; if it is incorrect, the behaviour of receivers is undefined.

The Content-Length header will not allow streaming, but it is useful for large binary files where you want to support partial content serving: resumable downloads, paused downloads, partial downloads, and multi-homed downloads. This requires an additional request header called Range; the technique is called byte serving.
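A simplified sketch of the Range arithmetic involved (single ranges only; a real server would also clamp out-of-bounds ends and answer 416 for unsatisfiable ranges; the resolve helper is hypothetical):

```java
public class ByteRange {

    // Parse a single-range "Range: bytes=start-end" header against a
    // resource of the given total length. Returns {start, end} inclusive,
    // or null when the header is absent or unsupported (serve 200 + full body).
    static long[] resolve(String rangeHeader, long totalLength) {
        if (rangeHeader == null || !rangeHeader.startsWith("bytes=")) return null;
        String spec = rangeHeader.substring("bytes=".length());
        int dash = spec.indexOf('-');
        if (dash < 0) return null;
        String from = spec.substring(0, dash), to = spec.substring(dash + 1);
        long start, end;
        if (from.isEmpty()) {                    // suffix range: last N bytes
            long n = Long.parseLong(to);
            start = Math.max(0, totalLength - n);
            end = totalLength - 1;
        } else {
            start = Long.parseLong(from);
            end = to.isEmpty() ? totalLength - 1 : Long.parseLong(to);
        }
        if (start > end || end >= totalLength) return null; // 416 in practice
        return new long[] {start, end};
    }

    public static void main(String[] args) {
        long total = 10L * 1024 * 1024;              // a 10 MiB file
        long[] r = resolve("bytes=7340032-", total); // resume at 7 MiB
        long partLength = r[1] - r[0] + 1;
        System.out.println("HTTP/1.1 206 Partial Content");
        System.out.println("Content-Range: bytes " + r[0] + "-" + r[1] + "/" + total);
        System.out.println("Content-Length: " + partLength);
    }
}
```

This is the resume scenario from the comments below: with 7 MiB of a 10 MiB file already on disk, only the remaining 3 MiB is re-sent, and the 206 response still carries a Content-Length for that part.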

Transfer-Encoding

The use of Transfer-Encoding: chunked is what allows streaming within a single request or response. The data is transmitted in chunks, and this framing does not affect the representation of the content itself.

Officially, an HTTP client is meant to send a TE header field specifying which transfer encodings it is willing to accept. It is not always sent; however, most servers assume that clients can process chunked encoding.

The chunked transfer encoding makes better use of persistent TCP connections, which HTTP/1.1 assumes by default.
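To make the framing concrete, here is a toy encoder for the chunked wire format (each chunk is a hex size line, CRLF, the data, CRLF, and the body ends with a zero-length chunk; trailers and chunk extensions are omitted). Names are illustrative, not from any library:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class ChunkedFraming {

    // Frame a body as HTTP/1.1 chunked transfer coding:
    // "<size-in-hex>\r\n<data>\r\n" per chunk, ended by "0\r\n\r\n".
    static byte[] frame(byte[] body, int chunkSize) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (int off = 0; off < body.length; off += chunkSize) {
            int len = Math.min(chunkSize, body.length - off);
            out.write(Integer.toHexString(len).getBytes(StandardCharsets.US_ASCII));
            out.write("\r\n".getBytes(StandardCharsets.US_ASCII));
            out.write(body, off, len);
            out.write("\r\n".getBytes(StandardCharsets.US_ASCII));
        }
        out.write("0\r\n\r\n".getBytes(StandardCharsets.US_ASCII)); // last chunk
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] wire = frame("Wikipedia".getBytes(StandardCharsets.US_ASCII), 4);
        System.out.print(new String(wire, StandardCharsets.US_ASCII));
        // 4\r\nWiki\r\n4\r\npedi\r\n1\r\na\r\n0\r\n\r\n
    }
}
```

The per-chunk overhead the question mentions is visible here: a few bytes of hex length plus two CRLFs per chunk, which is negligible for sensible chunk sizes.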

Content-Encoding

It is also possible to compress chunked or non-chunked data. In practice this is done via the Content-Encoding header.

Note that Content-Length is equal to the length of the body after the Content-Encoding has been applied. So if you gzip your response, the length calculation happens after compression, and you need to be able to hold the entire body in memory to calculate the length (unless you have that information elsewhere).

When streaming using chunked encoding, the compression algorithm must also support online processing; thankfully, gzip supports stream compression. The content gets compressed first and is then cut up into chunks; the receiver reassembles the chunks and decompresses the result to obtain the real content. If it were the other way around, decompressing the stream would yield chunk framing rather than content, which makes no sense.
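That ordering can be sketched from the receiver's side (a toy decoder, assuming well-formed input and ignoring trailers and chunk extensions; names are illustrative): undo the chunked framing first, then gunzip what remains.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class DecodeOrder {

    // Undo the chunked framing to recover the compressed stream the
    // sender produced (assumes well-formed input, no trailers).
    static byte[] unchunk(byte[] wire) {
        ByteArrayOutputStream body = new ByteArrayOutputStream();
        int pos = 0;
        while (true) {
            int crlf = indexOfCrlf(wire, pos);
            int size = Integer.parseInt(new String(wire, pos, crlf - pos), 16);
            if (size == 0) break;                 // zero-length chunk ends the body
            body.write(wire, crlf + 2, size);
            pos = crlf + 2 + size + 2;            // skip data and its trailing CRLF
        }
        return body.toByteArray();
    }

    private static int indexOfCrlf(byte[] b, int from) {
        for (int i = from; i + 1 < b.length; i++)
            if (b[i] == '\r' && b[i + 1] == '\n') return i;
        return -1;
    }

    public static void main(String[] args) throws IOException {
        // Sender side: gzip first, then chunk the compressed bytes
        // (here a single chunk carries the whole compressed stream).
        ByteArrayOutputStream gz = new ByteArrayOutputStream();
        try (GZIPOutputStream out = new GZIPOutputStream(gz)) {
            out.write("hello chunked gzip".getBytes());
        }
        byte[] compressed = gz.toByteArray();
        ByteArrayOutputStream w = new ByteArrayOutputStream();
        w.write((Integer.toHexString(compressed.length) + "\r\n").getBytes());
        w.write(compressed);
        w.write("\r\n0\r\n\r\n".getBytes());

        // Receiver side: unchunk first, gunzip second.
        byte[] stream = unchunk(w.toByteArray());
        String text = new String(new GZIPInputStream(
                new ByteArrayInputStream(stream)).readAllBytes());
        System.out.println(text);
    }
}
```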

A typical compressed stream response may have these headers:

Content-Type: text/html
Content-Encoding: gzip
Transfer-Encoding: chunked

Semantically, the usage of Content-Encoding indicates an "end-to-end" encoding scheme, which means only the final client or final server is supposed to decode the content. Proxies in the middle are not supposed to decode it.

If you want to allow proxies in the middle to decode the content, the correct header to use is in fact Transfer-Encoding. If the HTTP request carried a TE: gzip, chunked header, then it would be legal to respond with Transfer-Encoding: gzip, chunked.

However this is very rarely supported. So you should only use Content-Encoding for your compression right now.

Chunked vs Store & Forward

Author: Gandalf (Java, C++ coder - sometimes Oracle DBA. Spring Framework, Security, Tomcat, MongoDB, Lucene.)

Updated on November 21, 2020

Comments

  • Gandalf
    Gandalf over 3 years

    I'm trying to weigh the pros and cons of setting the Content-Length HTTP header versus using chunked encoding to return [possibly] large files from my server. One or the other is needed to be compliant with HTTP 1.1 specs using persistent connections. I see the advantage of the Content-Length header being :

    • Download dialogs can show accurate progress bar
    • Client knows upfront if the file may/may not be too large for them to ingest

    The downside is having to calculate the size before you return the object, which isn't always practical and could add to server/database utilization. The downsides of chunked encoding are the small overhead of adding the chunk size before each chunk and the loss of an accurate download progress bar. Any thoughts? Any other HTTP considerations for both methods that I may not have thought of?

    • james.garriss
      james.garriss over 12 years
      Is it a given that your content is static and its length is known a priori? If not, chunked would be much faster for large files.
  • BalusC
    BalusC over 14 years
    I don't see how byte range serving (basically: "resume downloads") is beneficial in this particular case. It namely requires that the content length is known beforehand, and then you could just as well set the content length.
  • Piskvor left the building
    Piskvor left the building over 14 years
    @BalusC: Content-Length is a prerequisite for byte-serving. Typical use case: user is downloading a 10MB file over her WiFi connection, signal drops 7MB into the download. Without resume, she has to download the whole 10MB again, which is quite annoying for her; with resume, there's only 3 MB left to go. Most modern browsers support this.
  • BalusC
    BalusC over 14 years
    Yes, I know. Maybe you didn't understand me? I am just saying that I don't see how this is related to the "Content-Length" vs. "Transfer-Encoding: chunked" question. By the way, the OP's post history tells me that his main language is Java; in that case this FileServlet example may be more useful: balusc.blogspot.com/2009/02/…
  • Piskvor left the building
    Piskvor left the building over 14 years
    @BalusC: last sentence of question: "Any other HTTP considerations for both methods that I may not have thought of?" When using Content-Length, it is possible to add this functionality; whereas with Transfer-Encoding: chunked, this is not possible.
  • BalusC
    BalusC over 14 years
    Yes, that's true. BTW: GZIP doesn't require output buffering. It's sent in chunked encoding by default, at least in Java servlet containers.
  • CMCDragonkai
    CMCDragonkai over 10 years
    Is there any way for the client to know how large the file is being downloaded without the server providing any details?
  • Piskvor left the building
    Piskvor left the building about 10 years
    @CMCDragonkai: No, it's all up to the server.
  • CMCDragonkai
    CMCDragonkai about 10 years
    In that case, can a client set a size limit? Can curl do so?
  • Piskvor left the building
    Piskvor left the building about 10 years
    @CMCDragonkai: Yes, the client can abort receiving the response while it's streaming from the server - note that you'll be left with a file which is probably incomplete. For cURL, the CURLOPT_PROGRESSFUNCTION option might be useful: php.net/manual/en/function.curl-setopt.php