How do I make cURL use keepalive from the command line?
Solution 1
curl already uses keepalive by default.
As an example:
curl -v http://www.google.com http://www.google.com
Produces the following:
* About to connect() to www.google.com port 80 (#0)
* Trying 74.125.39.99... connected
* Connected to www.google.com (74.125.39.99) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-pc-linux-gnu) libcurl/7.19.7 OpenSSL/0.9.8k zlib/1.2.3.3 libidn/1.15
> Host: www.google.com
> Accept: */*
>
< HTTP/1.1 302 Found
< Location: http://www.google.ch/
< Cache-Control: private
< Content-Type: text/html; charset=UTF-8
< Set-Cookie: PREF=ID=0dd153a227433b2f:FF=0:TM=1289232886:LM=1289232886:S=VoXSLP8XWvjzNcFj; expires=Wed, 07-Nov-2012 16:14:46 GMT; path=/; domain=.google.com
< Set-Cookie: NID=40=sOJuv6mxhQgqXkVEOzBwpUFU3YLPQYf4HRcySE1veCBV5cPtP3OiLPKqvRxL10VLiFETGz7cu25pD_EoUq1f_CkNwOna-xRcFFsCokiFqIbGPrb6DmUO7XhcpMYOt3dB; expires=Tue, 10-May-2011 16:14:46 GMT; path=/; domain=.google.com; HttpOnly
< Date: Mon, 08 Nov 2010 16:14:46 GMT
< Server: gws
< Content-Length: 218
< X-XSS-Protection: 1; mode=block
<
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>302 Moved</TITLE></HEAD><BODY>
<H1>302 Moved</H1>
The document has moved
<A HREF="http://www.google.ch/">here</A>.
</BODY></HTML>
* Connection #0 to host www.google.com left intact
* Re-using existing connection! (#0) with host www.google.com
* Connected to www.google.com (74.125.39.99) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-pc-linux-gnu) libcurl/7.19.7 OpenSSL/0.9.8k zlib/1.2.3.3 libidn/1.15
> Host: www.google.com
> Accept: */*
>
< HTTP/1.1 302 Found
< Location: http://www.google.ch/
< Cache-Control: private
< Content-Type: text/html; charset=UTF-8
< Set-Cookie: PREF=ID=8b531815cdfef717:FF=0:TM=1289232886:LM=1289232886:S=ifbAe1QBX915QGHr; expires=Wed, 07-Nov-2012 16:14:46 GMT; path=/; domain=.google.com
< Set-Cookie: NID=40=Rk86FyMCV3LzorQ1Ph8g1TV3f-h41NA-9fP6l7G-441pLEiciG9k8L4faOGC0VI6a8RafpukiDvaNvJqy8wExED9-Irzs7VdUQYwI8bCF2Kc2ivskb6KDRDkWzMxW_xG; expires=Tue, 10-May-2011 16:14:46 GMT; path=/; domain=.google.com; HttpOnly
< Date: Mon, 08 Nov 2010 16:14:46 GMT
< Server: gws
< Content-Length: 218
< X-XSS-Protection: 1; mode=block
<
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>302 Moved</TITLE></HEAD><BODY>
<H1>302 Moved</H1>
The document has moved
<A HREF="http://www.google.ch/">here</A>.
</BODY></HTML>
* Connection #0 to host www.google.com left intact
* Closing connection #0
This snippet:
* Connection #0 to host www.google.com left intact
* Re-using existing connection! (#0) with host www.google.com
indicates that curl re-used the same connection.
Use the same "curl -v http://my.server/url1 http://my.server/url2" invocation against your server and check that you see the same message.
Consider using tcpdump instead of netstat to see how the packets are handled. netstat will only give you a momentary glimpse of what's happening, whereas with tcpdump you'll see every single packet involved. Another option is Wireshark.
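As a quick check without a packet capture, you can filter curl's verbose output for the connection-handling notes (here "my.server" is a placeholder for your own host and paths):

```shell
# 'my.server' and the two paths are placeholders -- substitute your own.
# curl writes its connection notes to stderr, hence the 2>&1.
# One -o per URL keeps the response bodies out of the way.
curl -sv -o /dev/null -o /dev/null \
  http://my.server/url1 http://my.server/url2 2>&1 \
  | grep -E 'Re-using existing connection|left intact'
```

If the second request re-used the connection, the "Re-using existing connection!" line will appear in the filtered output.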
Solution 2
If your server allows 'KeepAlive On', you can use telnet to keep a persistent connection going like so:
$ while :;do echo -e "GET / HTTP/1.1\nhost: $YOUR_VIRTUAL_HOSTNAME\n\n";sleep 1;done|telnet $YOUR_SERVERS_IP 80
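Note that HTTP strictly requires CRLF ("\r\n") line endings; many servers tolerate the bare "\n" that echo -e produces, but a stricter variant of the same loop uses printf ($YOUR_VIRTUAL_HOSTNAME and $YOUR_SERVERS_IP are placeholders, as above):

```shell
# Same loop with explicit CRLF line endings; $YOUR_VIRTUAL_HOSTNAME and
# $YOUR_SERVERS_IP are placeholders for your Host header and server IP.
while :; do
  printf 'GET / HTTP/1.1\r\nHost: %s\r\n\r\n' "$YOUR_VIRTUAL_HOSTNAME"
  sleep 1
done | telnet "$YOUR_SERVERS_IP" 80
```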
Solution 3
One way to test HTTP persistent connection/Keep-Alive is to see if the TCP connection is reused for subsequent connections.
For example, I have a file containing the link http://google.com repeated multiple times.
Running the command below will fetch http://google.com multiple times over the same TCP connection.
curl -K /tmp/file
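(-K points curl at a config file; per man curl, /tmp/file here is assumed to contain the same url entry repeated, e.g.:)

```
url = "http://google.com"
url = "http://google.com"
url = "http://google.com"
```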
And while it runs, netstat shows that the TCP connection does not change and the old one is reused (the socket remains the same):
$ sudo netstat -pnt|grep curl
tcp 0 0 106.51.85.118:48682 74.125.236.69:80 ESTABLISHED 9732/curl
$ sudo netstat -pnt|grep curl
tcp 0 0 106.51.85.118:48682 74.125.236.69:80 ESTABLISHED 9732/curl
$ sudo netstat -pnt|grep curl
tcp 0 0 106.51.85.118:48682 74.125.236.69:80 ESTABLISHED 9732/curl
But when we tell the client to use HTTP 1.0, which does not use persistent connections by default, the socket address changes with each request:
$ curl -0 -K /tmp/file
$ sudo netstat -pnt|grep curl
tcp 0 0 106.51.85.118:48817 74.125.236.69:80 ESTABLISHED 9765/curl
$ sudo netstat -pnt|grep curl
tcp 0 0 106.51.85.118:48827 74.125.236.69:80 ESTABLISHED 9765/curl
$ sudo netstat -pnt|grep curl
tcp 0 74 106.51.85.118:48838 74.125.236.69:80 ESTABLISHED 9765/curl
From this we can be sure that the TCP connection is reused.
Solution 4
--keepalive-time
man curl... man.. :D
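A later comment notes, citing the man page, that --keepalive-time governs TCP-level keepalive probes on an idle connection, not HTTP-level Keep-Alive; HTTP connection reuse simply comes from fetching several URLs in one curl invocation (my.server is a placeholder):

```
# TCP keepalive probes start after 60 idle seconds; the HTTP connection
# is re-used because both URLs are fetched by the same curl process.
curl --keepalive-time 60 -v http://my.server/url1 http://my.server/url2
```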
Comments
-
JPD over 1 year
I'm trying to verify that HTTP persistent connections are being used during communication with a Tomcat webserver I've got running. Currently, I can retrieve a resource on my server from a browser (e.g. Chrome) and verify using netstat that the connection is established:
# visit http://server:8080/path/to/resource in Chrome
[server:/tmp]$ netstat -a
...
tcp 0 0 server.mydomain:webcache client.mydomain:55502 ESTABLISHED
However, if I use curl, I never see the connection on the server in netstat.
[client:/tmp]$ curl --keepalive-time 60 --keepalive http://server:8080/path/to/resource
...
[server:/tmp]$ netstat -a
# no connection exists for client.mydomain
I've also tried using the following curl command:
curl -H "Keep-Alive: 60" -H "Connection: keep-alive" http://server:8080/path/to/resource
Here's my client machine's curl version:
[server:/tmp]$ curl -V
curl 7.19.5 (x86_64-unknown-linux-gnu) libcurl/7.19.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5 libssh2/1.1
Protocols: tftp ftp telnet dict http file https ftps scp sftp
Features: IDN IPv6 Largefile NTLM SSL libz
How do I get curl to use a persistent/keepalive connection? I've done quite a bit of Googling on the subject, but with no success. It should be noted that I've also used links on the client machine to retrieve the resource, and that does give me an ESTABLISHED connection on the server. Let me know if I need to provide more information.
-
JPD over 13 years
I've read the man page, thanks. Did you not notice the --keepalive-time 60 in my example?
-
Arenstar over 13 years
oh... i feel stupid now :(
-
Roshan over 13 years
If you only request a single URL via curl there's no reason for curl to keep anything alive: the curl process terminates as soon as all URLs have been fetched. Specify two URLs (it could even be the same URL twice) and keep an eye on the output produced by "curl -v". By the time netstat runs, the connection has already been closed, as curl is no longer running and there is no longer a reason to keep the connection open.
-
JPD over 13 years
That makes sense; it wouldn't make sense to keep the connection lying around if the process owning it has finished. Thanks for your help.
-
ShabbyDoo over 8 years
Specifying "keepalive-time" as suggested above does not affect HTTP-level keep-alive; it affects low-level TCP connectivity. From the man page (curl.haxx.se/docs/manpage.html): "This option sets the time a connection needs to remain idle before sending keepalive probes and the time between individual keepalive probes." It's nice that there are so many kinds of keep-alives from which to choose, I suppose ;)
-
Michael Ozeryansky over 8 years
Beautiful. Beats a curl while loop by far.
-
Dave Gregory about 8 years
I was trying to test whether my KeepAliveTimeout change had been applied properly -- this was just the ticket. Thanks!
-
Chris about 3 years
How did you set up your file? Isn't -K a config file?
-
Chris about 3 years
Does curl allow multiple URLs via the CLI?
-
dsteinkopf about 3 years
Is it possible to give the different URL calls different headers? E.g. the first one with a referer and the second one without? I am asking because --next kills the connection.
-
Genzer over 2 years
@Chris, I believe @kannan-mohan mentioned it by "I have a file containing link of google.com repeated multiple times." Reading man curl, you could create the file containing multiple lines of url = "https://google.com".