"Curl : (33) HTTP server doesn't seem to support byte ranges. Cannot resume."
Solution 1
I tried running this command twice:
curl -L -C - 'http://www.naturalearthdata.com/http//www.naturalearthdata.com/download/10m/cultural/ne_10m_admin_0_sovereignty.zip' -o countries.zip
and got the following output:
$ curl -L -C - 'http://www.naturalearthdata.com/http//www.naturalearthdata.com/download/10m/cultural/ne_10m_admin_0_sovereignty.zip' -o countries.zip
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 5225k 100 5225k 0 0 720k 0 0:00:07 0:00:07 --:--:-- 836k
$ curl -L -C - 'http://www.naturalearthdata.com/http//www.naturalearthdata.com/download/10m/cultural/ne_10m_admin_0_sovereignty.zip' -o countries.zip
** Resuming transfer from byte position 5351381
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
$ echo $?
0
So, it looks like the "resume" is working fine. Since you posted the question back in May, it is entirely possible that cURL has fixed their bug or that the webserver in question has updated their support for HTTP range requests.
As you noted in the comments, the bug still occurs with the ngdc.noaa.gov website. I checked with my copy of curl and it behaves the same way, so the bug is still present in curl.
With Wireshark I checked what is going on in the HTTP protocol. Basically, when curl makes the request to resume the completed file, the server sends back an HTTP 416 error ("Requested Range Not Satisfiable"). In the case of naturalearthdata.com, the CDN they use adds a Content-Range header specifying the exact length of the file. ngdc.noaa.gov does not add this header. Note that the addition of Content-Range in HTTP 416 responses is optional per RFC 2616.
curl uses Content-Range to determine if the download is complete. If the header is missing, curl assumes that the server doesn't support range downloads and spits out that error message.
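Roughly, the difference between the two servers' 416 responses looks like this (reconstructed from the Wireshark observations above, not a verbatim capture; the byte count matches the resume position reported earlier):

```http
HTTP/1.1 416 Requested Range Not Satisfiable
Content-Range: bytes */5351381      <- naturalearthdata.com's CDN: header present

HTTP/1.1 416 Requested Range Not Satisfiable
                                    <- ngdc.noaa.gov: no Content-Range header
```

With the first form, curl can tell the file is already complete; with the second, it gives up with error 33.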
I've reported this as a bug to the libcurl mailing list. We'll see what they say. In the meantime, here are two possible workarounds:
- Use a different downloader. I often use aria2c, which is a very nice command-line download utility with support for multiple connections and resumed downloads. It may make your downloads faster by utilizing more of your connection (assuming the server supports it), and I've checked that aria2c doesn't suffer from the same bug as curl.
- Use curl -I <URL> | grep Content-Length | cut -d' ' -f 2 to obtain the length of the file, and check that against your downloaded file size before running curl.
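The second workaround could be scripted along these lines; this is a sketch, and the helper names (`remote_length`, `incomplete`) are mine, not standard tools:

```shell
#!/bin/sh
# Sketch of the second workaround: ask the server for the file's length with
# a HEAD request, compare it with the local size, and only run curl when
# bytes are actually missing.

# Print the remote Content-Length (the last one wins, after redirects).
remote_length() {
    curl -sIL "$1" | tr -d '\r' \
        | awk 'tolower($1) == "content-length:" { n = $2 } END { print n }'
}

# incomplete FILE EXPECTED_BYTES -> success if FILE is missing or short.
incomplete() {
    size=0
    [ -f "$1" ] && size=$(wc -c < "$1" | tr -d ' ')
    [ "$size" -lt "$2" ]
}

# Intended use (network-dependent, shown for illustration):
#   len=$(remote_length "$URL")
#   if incomplete countries.zip "$len"; then
#       curl -L -C - "$URL" -o countries.zip
#   fi
```

This avoids invoking `curl -C -` on an already-complete file, which is exactly the case that triggers error 33.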
Solution 2
If the web server doesn't support requests for specific byte ranges, you can't use -C, and that appears to be the case with this host.
Solution 3
As a workaround, first check that the target file doesn't exist.
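That check could look like the following sketch; the helper name (`download_if_missing`) is mine:

```shell
#!/bin/sh
# Minimal sketch of Solution 3: skip curl entirely when the target file is
# already present. Note this also skips resuming a partial file, so it is a
# blunt workaround, not a full fix.
download_if_missing() {
    [ -f "$2" ] && return 0        # target already exists: succeed silently
    curl -L -C - "$1" -o "$2"      # otherwise download (with resume support)
}

# Intended use:
#   download_if_missing "$URL" countries.zip
```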
Hugolpz
Updated on June 07, 2022

Comments
-
Hugolpz almost 2 years
Given an online file which I can download via my web browser, I run curl on it with:
mkdir -p ./data
curl -L -C - 'http://www.ngdc.noaa.gov/mgg/global/relief/ETOPO1/data/ice_surface/grid_registered/netcdf/readme_etopo1_netcdf.txt' -o ./data/countries.zip
and I obtain the following error message:
curl: (33) HTTP server doesn't seem to support byte ranges. Cannot resume.
How can I fix that? Other downloading tools are welcome.
Note:
- -L: follows redirects
- -C -: continues a previously unfinished download
Edit: this error message appears when the file to download already exists AND is already complete. It also stops the ongoing script. My requirements are:
- if the file doesn't exist, download it.
- if the file exists but is incomplete, continue the download where it stopped.
- if the file exists and is complete, pass silently to the next command (no failure).
How could I do so ?
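One way to script the three requirements above, sketched under an assumption the question itself suggests: with `-C -`, curl already downloads a missing file and resumes a partial one, so the only failing case is the "already complete" one, where curl exits with code 33. Treating that one exit code as success lets the surrounding script or Makefile rule continue. The helper name (`tolerate_33`) is mine:

```shell
#!/bin/sh
# Run a command, treating curl's exit code 33 ("HTTP server doesn't seem to
# support byte ranges. Cannot resume.") as success; any other nonzero exit
# code still fails, so real download errors are not masked.
tolerate_33() {
    "$@"
    rc=$?
    [ "$rc" -eq 0 ] || [ "$rc" -eq 33 ]
}

# Intended use (network-dependent, shown for illustration):
#   mkdir -p ./data
#   tolerate_33 curl -L -C - "$URL" -o ./data/countries.zip
```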
-
Hugolpz almost 10 years: It was working yesterday. It may be related to my new location and new wifi.
-
darnir almost 10 years: Unless you're actually trying to continue an existing download, this looks like a bug to me and you should submit a bug report to the folks at curl. Even if you're trying to continue an existing download, it should, in theory, fall back to downloading the complete file if the server does not support the Range header.
-
Hugolpz over 9 years: @darnir: after checking, it fails when the download target already exists and is complete. The trouble is that this error completely stops my longer makefile workflow.
-
Hugolpz about 9 years: An existing file will not tell you whether it is complete or incomplete, which is the cause of the bug.
-
PabloCocko about 9 years: Yes, it's not an explanation of the bug, but a quick workaround to solve the problem and continue.