Measure website load time with Python requests

69,939

Solution 1

As for your question, the measured value should be the total time to:

  1. Create the request object
  2. Send the request
  3. Receive the response
  4. Parse the response (see the comment from Thomas Orozco)
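
The steps above can be captured with a simple timer around the whole call. This is a minimal sketch, not from the original answer: the helper name `timed_call` and the use of `time.perf_counter` are illustrative.

```python
import time

def timed_call(fn, *args, **kwargs):
    """Call fn and return (result, elapsed_seconds).

    Wrapping requests.get (or any fetch function) this way times the
    whole cycle: building the request, sending it, and receiving the
    full response.
    """
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

# Usage (assumes the requests package is installed):
# response, seconds = timed_call(requests.get, 'http://www.google.com')
```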

Another way to measure a single request's load time is to use urllib:

import time
import urllib.request  # in Python 2 this was urllib.urlopen

url = 'http://www.google.com'  # example URL
start = time.time()
nf = urllib.request.urlopen(url)  # blocks until the response headers arrive
page = nf.read()                  # downloads the response body
end = time.time()
nf.close()
# end - start gives you the page load time

Solution 2

Recent versions of requests provide this functionality via Response.elapsed:

https://requests.readthedocs.io/en/latest/api/?highlight=elapsed#requests.Response.elapsed

For example:

requests.get("http://127.0.0.1").elapsed.total_seconds()

Solution 3

response.elapsed returns a timedelta object with the time elapsed from sending the request to the arrival of the response. It is often used to stop the connection after a certain amount of time has elapsed.

# import the requests module
import requests

# make a GET request
response = requests.get('http://stackoverflow.com/')

# print the response
print(response)

# print the elapsed time
print(response.elapsed)

output:

<Response [200]>
0:00:00.343720
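
Since response.elapsed is a datetime.timedelta, the value printed above converts to seconds with total_seconds(). A small sketch (the timedelta literal below just reuses that printed value):

```python
import datetime

# the value printed above, reconstructed as a timedelta
elapsed = datetime.timedelta(microseconds=343720)

print(elapsed.total_seconds())  # → 0.34372
```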
Author: cookM

Updated on July 09, 2022

Comments

  • cookM
    cookM almost 2 years

    I'm trying to build a tool for testing the delay of my internet connection, more specifically web site load times. I thought of using the python requests module for the loading part.

    Problem is, it's got no built-in functionality to measure the time it took to get the full response. For this I thought I would use the timeit module.

    What I'm not sure about is that if I run timeit like so:

    t = timeit.Timer("requests.get('http://www.google.com')", "import requests")
    

    Am I really measuring the time it took the response to arrive, or is it the time it takes for the request to be built, sent, received, etc.? I'm guessing I could maybe disregard that execution time, since I'm testing networks with very long delays (~700 ms)?

    Is there a better way to do this programmatically?

  • Thomas Orozco
    Thomas Orozco almost 12 years
    + 4. parse the HTTP response
  • cookM
    cookM almost 12 years
    That looks really nice, but looking at one of the examples I see: 1. start_timer = time.time() 2. Open Browser + Read Response 3. latency = time.time() - start_timer. Wouldn't that be kind of the same problem?
  • pyfunc
    pyfunc almost 12 years
    @cookM: I did not see it as problem but a real time experience of what the request latency will be. In fact it averages over many requests which will be closer to a realistic time.
  • pyfunc
    pyfunc almost 12 years
    @cookM: The wiki has more details on profiling load times: code.google.com/p/multi-mechanize/wiki/AdvancedScripts
  • cookM
    cookM almost 12 years
    @pyfunc Just saw your edit, I think that snippet is just what I was looking for. I'm not that familiar with urllib but I'm guessing that when I issue nf.read() what I'm doing is sending the request and getting it back right?
  • cookM
    cookM almost 12 years
    Nice! Seems like the wiki has a lot of useful information (duh). I'll give multi-mechanize a try. Thanks a lot for your help.
  • pyfunc
    pyfunc almost 12 years
    @cookM: Yes, when you do nf.read() you are doing all four of the steps mentioned above, but for a realistic load profile I would suggest you give multi-mechanize a try. It is a bit more involved than the snippet but has real returns.
  • Janus Troelsen
    Janus Troelsen about 11 years
    urlopen seems to block until the headers come, so I'd put the start assignment before.
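
    To illustrate Janus Troelsen's point: urlopen() returns once the headers arrive, so starting the clock before it and timing read() separately splits time-to-first-byte from total load time. A sketch under that assumption (the function name and structure are illustrative, not from the thread):

    ```python
    import time
    import urllib.request

    def fetch_with_timings(url):
        """Return (ttfb_seconds, total_seconds) for one GET request."""
        start = time.perf_counter()
        resp = urllib.request.urlopen(url)  # blocks until headers arrive
        ttfb = time.perf_counter() - start  # time to first byte (headers)
        body = resp.read()                  # download the response body
        total = time.perf_counter() - start
        resp.close()
        return ttfb, total
    ```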
  • Michael Osl
    Michael Osl about 10 years
    to get the response time in seconds: requests.get("http://127.0.0.1").elapsed.total_seconds()
  • Admin
    Admin over 9 years
    I would say that this says very little about a website. A website isn't just the request response, but all the subsequent html and ajax requests.... What am I missing?
  • Luc
    Luc almost 8 years
    To add to Michael Osl's comment: total_seconds() is a decimal number which seems to have microsecond precision.
  • ChaimG
    ChaimG almost 8 years
    Do not use this for profiling and optimizing your code on the client side. It only measures the server's response time (which was the OP's question). "The amount of time elapsed between sending the request and the arrival of the response (as a timedelta). This property specifically measures the time taken between sending the first byte of the request and finishing parsing the headers. It is therefore unaffected by consuming the response content or the value of the stream keyword argument." - the docs.
  • Gregorius Edwadr
    Gregorius Edwadr about 7 years
    Hi @pyfunc, I just read your answer. Is it possible to detail the time for each of your points? Like how much time it takes to send the request, how much time it takes for the server to process it, etc.
  • agent nate
    agent nate almost 7 years
    That link seems to be infected. Takes you to a website advertising: "How, Where And Whether To Buy Instagram Followers"
  • Heinz
    Heinz over 6 years
    @GvS what about Python 3 and urllib3?