Python requests, how to limit received size, transfer rate, and/or total time?

You could try setting stream=True, then aborting a request when your time or size limits are exceeded while you read the data in chunks.

As of requests release 2.3.0 the timeout applies to streaming requests too, so all you need to do is set a timeout for the initial connection and for each iteration step:

import time

import requests

r = requests.get(..., stream=True, timeout=initial_timeout)
r.raise_for_status()

# Reject early when the server declares an oversized body; default to 0
# because Content-Length may be absent (e.g. chunked transfer encoding).
if int(r.headers.get('Content-Length', 0)) > your_maximum:
    raise ValueError('response too large')

size = 0
start = time.time()

for chunk in r.iter_content(1024):
    # Enforce a total wall-clock limit across all iterations.
    if time.time() - start > receive_timeout:
        raise ValueError('timeout reached')

    # Count the bytes actually received; Content-Length can be missing
    # or wrong, so the declared size alone cannot be trusted.
    size += len(chunk)
    if size > your_maximum:
        raise ValueError('response too large')

    # do something with chunk

Adjust the timeout as needed.
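As an aside, newer requests releases (2.4.0 and later, if memory serves) also accept a (connect, read) tuple for timeout, which makes the two limits explicit. A minimal sketch, where some_url and the numbers are placeholders:

import requests

# Placeholder limits: 5 seconds to establish the connection,
# 30 seconds allowed between bytes while streaming the body.
r = requests.get(some_url, stream=True, timeout=(5, 30))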

For requests releases older than 2.3.0 (the release that introduced this change) you could not time out the r.iter_content() yield; a server that stopped responding in the middle of a chunk would still tie up the connection. You'd have to wrap the above code in an additional timeout function to cut off long-running responses early.
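If you're stuck on one of those older releases, one way to impose such a hard cutoff is a SIGALRM handler. This is only a sketch, assuming a Unix system and the main thread; some_url and receive_timeout are placeholders:

import signal

import requests

def handle_alarm(signum, frame):
    # Raised at whatever point iter_content() happens to be blocked.
    raise TimeoutError('total request time exceeded')

signal.signal(signal.SIGALRM, handle_alarm)
signal.alarm(receive_timeout)  # whole-request budget, in whole seconds
try:
    r = requests.get(some_url, stream=True)
    r.raise_for_status()
    for chunk in r.iter_content(1024):
        pass  # process chunk as before
finally:
    signal.alarm(0)  # always cancel the pending alarm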


Comments

  • edA-qa mort-ora-y about 3 years

    My server makes external requests and I'd like to limit the damage a failing request can do. I'm looking to cancel the request in these situations:

    • the total time of the request is over a certain limit (even if data is still arriving)
    • the total received size exceeds some limit (I need to cancel prior to accepting more data)
    • the transfer speed drops below some level (though I can live without this one if a total time limit can be provided)

    Note I am not looking for the timeout parameter in requests, as that is a timeout for inactivity only. I'm unable to find anything to do with a total timeout, or a way to limit the total size. One example shows a maxsize parameter on HTTPAdapter, but that is not documented.

    How can I achieve these requirements using requests?

    • Martijn Pieters about 10 years
      maxsize is a limit on the connection pool, I think, not on received size.
    • Valentin Lorentz about 10 years
      Not a solution, but you should also make sure that the size limit takes the size of the headers into account, which some libraries (like urllib) don't.
    • edA-qa mort-ora-y about 10 years
      @ValentinLorentz, yes, indeed I'd want a much lower size limit on the headers than the content.
    • Hieu about 10 years
      About the total timeout, you might like to have a look at my answer to a similar question: stackoverflow.com/a/22377499/1653521
  • zx81 almost 9 years
    A small suggestion would be to accumulate the received content as each chunk arrives, as you did in your other answer. +1
  • Martijn Pieters almost 9 years
    @zx81: that is what the "do something with chunk" comment is about; you don't have to collect all the content into one big string, you could also process it iteratively.
  • zx81 almost 9 years
    @MartijnPieters Yes, I saw that. It was just a suggestion to make the code more immediately useful to the average passerby. No worries though, they can read the comments. :) Best wishes
  • chander almost 3 years
    It should be noted that unless you are (a) writing the data to disk or (b) processing the streamed data in memory (as it streams), it's likely more performant to set the chunk size to the maximum you allow. Reading in small chunks will be significantly slower, and the end result is the data stored in memory anyway.
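Following chander's point, a minimal sketch that reads the body in chunks as large as the cap itself; your_maximum and r refer to the answer's code above, and collecting the parts into one bytes object is just one way to keep the data:

parts = []
size = 0
# Chunks no larger than the cap: a compliant response completes in one
# or two iterations instead of thousands of small reads.
for chunk in r.iter_content(chunk_size=your_maximum):
    size += len(chunk)
    if size > your_maximum:
        raise ValueError('response too large')
    parts.append(chunk)
body = b''.join(parts)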