Read timeout using either urllib2 or any other http library


Solution 1

It's not possible for any library to do this without using some kind of asynchronous timer, through threads or otherwise. The reason is that the timeout parameter used in httplib, urllib2 and other libraries sets the timeout on the underlying socket. What that actually does is explained in the documentation of the SO_RCVTIMEO socket option.

SO_RCVTIMEO

Sets the timeout value that specifies the maximum amount of time an input function waits until it completes. It accepts a timeval structure with the number of seconds and microseconds specifying the limit on how long to wait for an input operation to complete. If a receive operation has blocked for this much time without receiving additional data, it shall return with a partial count or errno set to [EAGAIN] or [EWOULDBLOCK] if no data is received.

The last sentence is the key: a socket.timeout is only raised if not a single byte has been received for the duration of the timeout window. In other words, this is a timeout between received bytes.
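To see this behavior in isolation, here is a small self-contained sketch (the trickle_server helper and the port number are made up for illustration): the server dribbles out one byte per second, so a client socket with a 2-second timeout never raises socket.timeout, even though the whole read takes around five seconds.

import socket
import threading
import time

def trickle_server(port, nbytes=5, delay=1.0):
    # Toy server: accepts one connection, then sends one byte per `delay`
    # seconds so that no single recv() ever has to wait very long.
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(('127.0.0.1', port))
    srv.listen(1)
    conn, _ = srv.accept()
    for _ in range(nbytes):
        conn.sendall(b'x')
        time.sleep(delay)
    conn.close()
    srv.close()

threading.Thread(target=trickle_server, args=(12345,)).start()
time.sleep(0.5)  # give the server a moment to start listening

client = socket.create_connection(('127.0.0.1', 12345), timeout=2.0)
start = time.time()
data = b''
while len(data) < 5:
    data += client.recv(1024)  # each recv returns within ~1 s, well under 2 s
client.close()
print('read %d bytes in %.1f s without a socket.timeout' % (len(data), time.time() - start))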

A simple function using threading.Timer could be as follows.

import httplib
import socket
import threading

def download(host, path, timeout = 10):
    content = None

    http = httplib.HTTPConnection(host)
    http.request('GET', path)
    response = http.getresponse()

    # Shut down the read side of the socket when the total timeout expires,
    # which forces the blocking read() below to return early.
    timer = threading.Timer(timeout, http.sock.shutdown, [socket.SHUT_RD])
    timer.start()

    try:
        content = response.read()
    except httplib.IncompleteRead:
        # Raised when the shutdown cuts off a response whose Content-Length
        # promised more bytes than were actually received.
        pass

    timer.cancel()  # cancelling an already-triggered Timer is safe
    http.close()

    return content

>>> host = 'releases.ubuntu.com'
>>> content = download(host, '/15.04/ubuntu-15.04-desktop-amd64.iso', 1)
>>> print content is None
True
>>> content = download(host, '/15.04/MD5SUMS', 1)
>>> print content is None
False

Instead of checking for None, it's also possible to catch the httplib.IncompleteRead exception outside the function rather than inside it. The latter won't work, though, if the HTTP response doesn't carry a Content-Length header.
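As the comments below argue, raising an exception is usually more robust than returning None, since a None return forces every intermediate caller to remember to check for it. Here is a minimal sketch of such a variant under the same Python 2 httplib assumptions (TimeoutError, download_or_raise and abort are illustrative names, not part of the original answer); passing timeout to HTTPConnection also bounds the connect, as suggested in the comments:

import httplib
import socket
import threading

class TimeoutError(EnvironmentError):
    pass

def download_or_raise(host, path, timeout=10):
    # The connection timeout also sets a per-recv socket timeout, so a
    # completely silent server may surface as socket.timeout during read().
    http = httplib.HTTPConnection(host, timeout=timeout)
    http.request('GET', path)
    response = http.getresponse()

    timed_out = [False]  # mutable flag the timer thread can set

    def abort():
        timed_out[0] = True
        http.sock.shutdown(socket.SHUT_RD)

    timer = threading.Timer(timeout, abort)
    timer.start()
    try:
        content = response.read()
    except httplib.IncompleteRead:
        content = None
    finally:
        timer.cancel()
        http.close()

    if timed_out[0]:
        raise TimeoutError('read timed out: %s%s' % (host, path))
    return content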

Solution 2

I found in my tests (using the technique described here) that a timeout set in the urlopen() call also affects the read() call:

import urllib2 as u
c = u.urlopen('http://localhost/', timeout=5.0)
s = c.read(1<<20)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.7/socket.py", line 380, in read
    data = self._sock.recv(left)
  File "/usr/lib/python2.7/httplib.py", line 561, in read
    s = self.fp.read(amt)
  File "/usr/lib/python2.7/httplib.py", line 1298, in read
    return s + self._file.read(amt - len(s))
  File "/usr/lib/python2.7/socket.py", line 380, in read
    data = self._sock.recv(left)
socket.timeout: timed out

Maybe it's a feature of newer versions? I'm using Python 2.7 on a stock Ubuntu 12.04.
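Note, though, that this timeout applies to each individual recv() on the socket: a server that delivers a byte just inside every window can stretch read() far beyond the limit (see the comments below). One way to approximate a true total limit on top of it is to read in chunks against a wall-clock deadline; a sketch, with read_with_deadline as an illustrative name:

import socket
import time
import urllib2

def read_with_deadline(url, deadline_seconds, chunk_size=8192):
    # The timeout passed to urlopen() bounds each individual recv();
    # the wall-clock check below additionally bounds the read as a whole.
    response = urllib2.urlopen(url, timeout=deadline_seconds)
    deadline = time.time() + deadline_seconds
    chunks = []
    while True:
        if time.time() > deadline:
            raise socket.timeout('total read deadline exceeded')
        chunk = response.read(chunk_size)
        if not chunk:  # EOF
            return ''.join(chunks)
        chunks.append(chunk)

In the worst case this still overshoots by one per-recv timeout, because each read(chunk_size) may block for up to the socket timeout before the next deadline check runs.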

Solution 3

One possible (imperfect) solution is to set the global socket timeout, explained in more detail here:

import socket
import urllib2

# timeout in seconds
socket.setdefaulttimeout(10)

# this call to urllib2.urlopen now uses the default timeout
# we have set in the socket module
req = urllib2.Request('http://www.voidspace.org.uk')
response = urllib2.urlopen(req)

However, this only works if you're willing to globally modify the timeout for all users of the socket module. I'm running the request from within a Celery task, so doing this would mess up timeouts for the Celery worker code itself.
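If you do use this approach, saving and restoring the previous default at least confines the change to the duration of the call (a sketch; fetch_with_default_timeout is an illustrative name). It remains process-wide while it runs, so concurrent threads are still affected:

import socket
import urllib2

def fetch_with_default_timeout(url, timeout):
    # Swap the global default in, and restore the old value afterwards
    # even if the request raises.
    old_timeout = socket.getdefaulttimeout()
    socket.setdefaulttimeout(timeout)
    try:
        return urllib2.urlopen(url).read()
    finally:
        socket.setdefaulttimeout(old_timeout)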

I'd be happy to hear any other solutions...

Solution 4

I'd expect this to be a common problem, and yet there are no answers to be found anywhere... I just built a solution for this using a timeout signal:

import signal
import socket
import urllib2

# per-recv timeout; note this alone does not bound the total read time
socket.setdefaulttimeout(10)

def timeout_catcher(signum, _):
    raise urllib2.URLError("Read timeout")

signal.signal(signal.SIGALRM, timeout_catcher)

def safe_read(url, timeout_time):
    # arm a wall-clock timer; SIGALRM fires if the read overruns it
    signal.setitimer(signal.ITIMER_REAL, timeout_time)
    try:
        return urllib2.urlopen(url, timeout=timeout_time).read()
    finally:
        # disarm the timer whether the read returned or raised
        signal.setitimer(signal.ITIMER_REAL, 0)
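Two standard caveats apply: signal.setitimer and SIGALRM are Unix-only, and the handler only fires in the main thread, so this won't help inside worker threads. Usage then looks something like:

try:
    content = safe_read('http://uberdns.eu', 10)
except urllib2.URLError:
    content = None  # timed out, or failed for some other URLError reason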

By the way, credit for the signal part of the solution goes to this question: python timer mystery

Solution 5

Any asynchronous network library should allow you to enforce a total timeout on any I/O operation. For example, here's a gevent code example:

#!/usr/bin/env python2
import gevent
import gevent.monkey # $ pip install gevent
gevent.monkey.patch_all()

import urllib2

with gevent.Timeout(2): # enforce total timeout
    response = urllib2.urlopen('http://localhost:8000')
    encoding = response.headers.getparam('charset')
    print response.read().decode(encoding)
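If the deadline expires, the with block raises a gevent.Timeout exception, which you would typically catch at the call site; a usage sketch:

import gevent
import urllib2

try:
    with gevent.Timeout(2):  # raises gevent.Timeout if the block overruns
        body = urllib2.urlopen('http://localhost:8000').read()
except gevent.Timeout:
    body = None  # the server was too slow; treat the response as missing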

And here's the asyncio equivalent:

#!/usr/bin/env python3.5
import asyncio
import aiohttp # $ pip install aiohttp

async def fetch_text(url):
    response = await aiohttp.get(url)
    return await response.text()

text = asyncio.get_event_loop().run_until_complete(
    asyncio.wait_for(fetch_text('http://localhost:8000'), timeout=2))
print(text)
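A caveat on the aiohttp snippet: the module-level aiohttp.get() helper existed in early aiohttp releases but has since been removed. On current aiohttp (3.x) the same total timeout can be enforced with a ClientSession; a sketch under that assumption:

import asyncio
import aiohttp  # assumes aiohttp 3.x

async def fetch_text(url):
    # A session manages connection pooling; both context managers
    # clean up the connection and response when the block exits.
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            return await response.text()

text = asyncio.get_event_loop().run_until_complete(
    asyncio.wait_for(fetch_text('http://localhost:8000'), timeout=2))
print(text)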

The test http server is defined here.


Comments

  • Björn Lindqvist

    I have code for reading a URL like this:

    from urllib2 import Request, urlopen
    req = Request(url)
    for key, val in headers.items():
        req.add_header(key, val)
    res = urlopen(req, timeout = timeout)
    # This line blocks
    content = res.read()
    

    The timeout works for the urlopen() call. But then the code reaches the res.read() call, where I want to read the response data, and the timeout isn't applied there. So the read call may hang almost forever waiting for data from the server. The only solution I've found is to use a signal to interrupt the read(), which is not suitable for me since I'm using threads.

    What other options are there? Is there an HTTP library for Python that handles read timeouts? I've looked at httplib2 and requests and they seem to suffer from the same issue as above. I don't want to write my own nonblocking network code using the socket module because I think there should already be a library for this.

    Update: None of the solutions below are doing it for me. You can see for yourself that setting the socket or urlopen timeout has no effect when downloading a large file:

    from urllib2 import urlopen
    url = 'http://iso.linuxquestions.org/download/388/7163/http/se.releases.ubuntu.com/ubuntu-12.04.3-desktop-i386.iso'
    c = urlopen(url)
    c.read()
    

    At least on Windows with Python 2.7.3, the timeouts are being completely ignored.

    • jfs
      related to total connection timeout: HTTPConnection.request not respecting timeout?
    • speedplane
      Does this issue affect Python 3 as well? Have any steps been taken to address it? It seems like an issue with the built-in Python HTTP library itself.
  • Björn Lindqvist
    I think you misunderstand. The urlopen() call connects to the server successfully, but then the program hangs at the read() call because the server returns the data too slowly. That is where the timeout is needed.
  • Björn Lindqvist
    But does it time out the read() call or the urlopen() one? I'd like to test this solution, but it's pretty hard to set up a situation in which the server times out during the client's recv call on the socket.
  • kolinko
    Bjorn, as for the read vs urlopen - it times out both the read() and the urlopen() calls. I tested it with this url: "uberdns.eu" - which, at least yesterday, caused my crawler to hang on read. This is the solution I tested, and it worked where both the socket default timeout and the urlopen timeout failed.
  • kolinko
    As for the threads - no idea, you'd have to check the setitimer documentation.
  • Björn Lindqvist
    At least on Windows with Python 2.7 it has no effect on the read() call.
  • jfs
    You don't need lambda here: Timer(timeout, sock.shutdown, [socket.SHUT_RDWR]). You should raise TimeoutError on timeout instead of returning None.
  • Admin
    @J.F.Sebastian Yep, there are numerous ways to signal timeout here, such as raising a custom exception. Thanks for the args tip.
  • jfs
    There are preferable ways to signal the timeout: the download() function may be buried several stack frames down from the place that sets its parameters, and the timeout may be triggered only for certain sites at certain times -- what do you expect intermediate functions to do if content is None? If even one place forgets to handle the error return value, it may have undesirable side-effects. Exceptions are the mechanism that delivers the error from the place where it is detected to the place where it is known what to do with it. And the default behavior (errors are not ignored) is more robust.
  • jfs
    setdefaulttimeout() does not limit the total read timeout e.g., the server may send a byte every 5 seconds and the timeout never triggers.
  • jfs
    it may trigger the timeout for individual .recv() calls (that may return partial data) but it does not limit the total read timeout (until EOF).
  • jfs
    btw, as far as I can tell, your answer is the only one that does limit the total read timeout (you should probably pass timeout parameter to HTTPConnection to try to limit the connection timeout too).
  • jfs
    the absence of class TimeoutError(EnvironmentError): pass is not the reason to promote bad practice.
  • Alfe
    Yes, that clarification has its value.
  • Evan Jensen
    This works great (the gevent snippet at least). I have a simple program to grab an image and store it with timestamp, and this did the job of letting the program end if the url is unavailable. Thanks!