Fastest parallel requests in Python


Solution 1

Instead of using multithreading or asyncio's run_in_executor, you should use aiohttp, which is the asynchronous equivalent of requests.

import asyncio
import aiohttp
import time

websites = """https://www.youtube.com
https://www.facebook.com
https://www.baidu.com
https://www.yahoo.com
https://www.amazon.com
https://www.wikipedia.org
http://www.qq.com
https://www.google.co.in
https://www.twitter.com
https://www.live.com
http://www.taobao.com
https://www.bing.com
https://www.instagram.com
http://www.weibo.com
http://www.sina.com.cn
https://www.linkedin.com
http://www.yahoo.co.jp
http://www.msn.com
http://www.uol.com.br
https://www.google.de
http://www.yandex.ru
http://www.hao123.com
https://www.google.co.uk
https://www.reddit.com
https://www.ebay.com
https://www.google.fr
https://www.t.co
http://www.tmall.com
http://www.google.com.br
https://www.360.cn
http://www.sohu.com
https://www.amazon.co.jp
http://www.pinterest.com
https://www.netflix.com
http://www.google.it
https://www.google.ru
https://www.microsoft.com
http://www.google.es
https://www.wordpress.com
http://www.gmw.cn
https://www.tumblr.com
http://www.paypal.com
http://www.blogspot.com
http://www.imgur.com
https://www.stackoverflow.com
https://www.aliexpress.com
https://www.naver.com
http://www.ok.ru
https://www.apple.com
http://www.github.com
http://www.chinadaily.com.cn
http://www.imdb.com
https://www.google.co.kr
http://www.fc2.com
http://www.jd.com
http://www.blogger.com
http://www.163.com
http://www.google.ca
https://www.whatsapp.com
https://www.amazon.in
http://www.office.com
http://www.tianya.cn
http://www.google.co.id
http://www.youku.com
https://www.example.com
http://www.craigslist.org
https://www.amazon.de
http://www.nicovideo.jp
https://www.google.pl
http://www.soso.com
http://www.bilibili.com
http://www.dropbox.com
http://www.xinhuanet.com
http://www.outbrain.com
http://www.pixnet.net
http://www.alibaba.com
http://www.alipay.com
http://www.chrome.com
http://www.booking.com
http://www.googleusercontent.com
http://www.google.com.au
http://www.popads.net
http://www.cntv.cn
http://www.zhihu.com
https://www.amazon.co.uk
http://www.diply.com
http://www.coccoc.com
https://www.cnn.com
http://www.bbc.co.uk
https://www.twitch.tv
https://www.wikia.com
http://www.google.co.th
http://www.go.com
https://www.google.com.ph
http://www.doubleclick.net
http://www.onet.pl
http://www.googleadservices.com
http://www.accuweather.com
http://www.googleweblight.com
http://www.answers.yahoo.com"""


async def get(url, session):
    try:
        async with session.get(url=url) as response:
            resp = await response.read()
            print("Successfully got url {} with resp of length {}.".format(url, len(resp)))
    except Exception as e:
        print("Unable to get url {} due to {}.".format(url, e.__class__))


async def main(urls):
    async with aiohttp.ClientSession() as session:
        ret = await asyncio.gather(*[get(url, session) for url in urls])
    print("Finalized all. Return is a list of len {} outputs.".format(len(ret)))


urls = websites.split("\n")
start = time.time()
asyncio.run(main(urls))
end = time.time()

print("Took {} seconds to pull {} websites.".format(end - start, len(urls)))

Outputs:

Successfully got url http://www.msn.com with resp of length 47967.
Successfully got url http://www.google.com.br with resp of length 14823.
Successfully got url https://www.t.co with resp of length 0.
Successfully got url http://www.google.es with resp of length 14798.
Successfully got url https://www.wikipedia.org with resp of length 66691.
Successfully got url http://www.google.it with resp of length 14805.
Successfully got url http://www.googleadservices.com with resp of length 1561.
Successfully got url http://www.cntv.cn with resp of length 3232.
Successfully got url https://www.example.com with resp of length 1256.
Successfully got url https://www.google.co.uk with resp of length 14184.
Successfully got url http://www.accuweather.com with resp of length 269.
Successfully got url http://www.google.ca with resp of length 14172.
Successfully got url https://www.facebook.com with resp of length 192898.
Successfully got url https://www.apple.com with resp of length 75422.
Successfully got url http://www.gmw.cn with resp of length 136136.
Successfully got url https://www.google.ru with resp of length 14803.
Successfully got url https://www.bing.com with resp of length 70314.
Successfully got url http://www.googleusercontent.com with resp of length 1561.
Successfully got url https://www.tumblr.com with resp of length 37500.
Successfully got url http://www.googleweblight.com with resp of length 1619.
Successfully got url https://www.google.co.in with resp of length 14230.
Successfully got url http://www.qq.com with resp of length 101957.
Successfully got url http://www.xinhuanet.com with resp of length 113239.
Successfully got url https://www.twitch.tv with resp of length 105014.
Successfully got url http://www.google.co.id with resp of length 14806.
Successfully got url https://www.linkedin.com with resp of length 90047.
Successfully got url https://www.google.fr with resp of length 14777.
Successfully got url https://www.google.co.kr with resp of length 14797.
Successfully got url http://www.google.co.th with resp of length 14783.
Successfully got url https://www.google.pl with resp of length 14769.
Successfully got url http://www.google.com.au with resp of length 14228.
Successfully got url https://www.whatsapp.com with resp of length 84551.
Successfully got url https://www.google.de with resp of length 14767.
Successfully got url https://www.google.com.ph with resp of length 14196.
Successfully got url https://www.cnn.com with resp of length 1135447.
Successfully got url https://www.wordpress.com with resp of length 216637.
Successfully got url https://www.twitter.com with resp of length 61869.
Successfully got url http://www.alibaba.com with resp of length 282210.
Successfully got url https://www.instagram.com with resp of length 20776.
Successfully got url https://www.live.com with resp of length 36621.
Successfully got url https://www.aliexpress.com with resp of length 37388.
Successfully got url http://www.uol.com.br with resp of length 463614.
Successfully got url https://www.microsoft.com with resp of length 230635.
Successfully got url http://www.pinterest.com with resp of length 87012.
Successfully got url http://www.paypal.com with resp of length 103763.
Successfully got url https://www.wikia.com with resp of length 237977.
Successfully got url http://www.sina.com.cn with resp of length 530525.
Successfully got url https://www.amazon.de with resp of length 341222.
Successfully got url https://www.stackoverflow.com with resp of length 190878.
Successfully got url https://www.ebay.com with resp of length 263256.
Successfully got url http://www.diply.com with resp of length 557848.
Successfully got url http://www.office.com with resp of length 111909.
Successfully got url http://www.imgur.com with resp of length 6223.
Successfully got url https://www.amazon.co.jp with resp of length 417751.
Successfully got url http://www.outbrain.com with resp of length 54481.
Successfully got url https://www.amazon.co.uk with resp of length 362057.
Successfully got url http://www.chrome.com with resp of length 223832.
Successfully got url http://www.popads.net with resp of length 14517.
Successfully got url https://www.youtube.com with resp of length 571028.
Successfully got url http://www.doubleclick.net with resp of length 130244.
Successfully got url https://www.yahoo.com with resp of length 510721.
Successfully got url http://www.tianya.cn with resp of length 7619.
Successfully got url https://www.netflix.com with resp of length 422277.
Successfully got url https://www.naver.com with resp of length 210175.
Successfully got url http://www.blogger.com with resp of length 94478.
Successfully got url http://www.soso.com with resp of length 5816.
Successfully got url http://www.github.com with resp of length 212285.
Successfully got url https://www.amazon.com with resp of length 442097.
Successfully got url http://www.go.com with resp of length 598355.
Successfully got url http://www.chinadaily.com.cn with resp of length 102857.
Successfully got url http://www.sohu.com with resp of length 216027.
Successfully got url https://www.amazon.in with resp of length 417175.
Successfully got url http://www.answers.yahoo.com with resp of length 104628.
Successfully got url http://www.jd.com with resp of length 18217.
Successfully got url http://www.blogspot.com with resp of length 94478.
Successfully got url http://www.fc2.com with resp of length 16997.
Successfully got url https://www.baidu.com with resp of length 301922.
Successfully got url http://www.craigslist.org with resp of length 59438.
Successfully got url http://www.imdb.com with resp of length 675494.
Successfully got url http://www.yahoo.co.jp with resp of length 37036.
Successfully got url http://www.onet.pl with resp of length 854384.
Successfully got url http://www.dropbox.com with resp of length 200591.
Successfully got url http://www.zhihu.com with resp of length 50543.
Successfully got url http://www.yandex.ru with resp of length 174347.
Successfully got url http://www.ok.ru with resp of length 206604.
Successfully got url http://www.163.com with resp of length 588036.
Successfully got url http://www.bbc.co.uk with resp of length 303267.
Successfully got url http://www.nicovideo.jp with resp of length 116124.
Successfully got url http://www.pixnet.net with resp of length 6448.
Successfully got url http://www.bilibili.com with resp of length 96941.
Successfully got url https://www.reddit.com with resp of length 718393.
Successfully got url http://www.booking.com with resp of length 472655.
Successfully got url https://www.360.cn with resp of length 79943.
Successfully got url http://www.taobao.com with resp of length 384755.
Successfully got url http://www.youku.com with resp of length 326873.
Successfully got url http://www.coccoc.com with resp of length 64687.
Successfully got url http://www.tmall.com with resp of length 137527.
Successfully got url http://www.hao123.com with resp of length 331222.
Successfully got url http://www.weibo.com with resp of length 93712.
Successfully got url http://www.alipay.com with resp of length 24057.
Finalized all. Return is a list of len 100 outputs.
Took 3.9256999492645264 seconds to pull 100 websites.

As you can see, 100 websites from across the world were successfully reached (with or without https) in about 4 seconds with aiohttp on my internet connection (Miami, Florida). Keep in mind that the following can slow the program down by a few ms:

  • print statements (yes, including the ones placed in the code above).
  • Reaching servers further away from your geographical location.

The example above has both of these, so it is arguably not the most optimized way of doing what you have asked. However, I do believe it is a great start for what you are looking for.
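If those milliseconds matter, one small variation of the code above (my own sketch, not part of the original answer) is to drop the per-request print statements and return the results instead, so that only a single summary line is printed at the end:

import asyncio
import aiohttp


async def get(url, session):
    try:
        async with session.get(url=url) as response:
            resp = await response.read()
            return url, len(resp)      # return the data instead of printing per request
    except Exception as e:
        return url, e                  # report the failure to the caller


async def main(urls):
    async with aiohttp.ClientSession() as session:
        results = await asyncio.gather(*[get(url, session) for url in urls])
    print("Finalized all. Collected {} results.".format(len(results)))
    return results


# Driven exactly like the original code, e.g.: asyncio.run(main(urls))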


Edit: April 6th, 2021

Please note that in the above code we are querying multiple (different) servers, and therefore the use of a single ClientSession might degrade performance:

Session encapsulates a connection pool (connector instance) and supports keepalives by default. Unless you are connecting to a large, unknown number of different servers over the lifetime of your application, it is suggested you use a single session for the lifetime of your application to benefit from connection pooling. (reference).

If your plan is to query a known number of servers, defaulting to a single ClientSession is probably best. I've modified the answer to use a single ClientSession, since it's my belief that most folks finding use for this answer won't be querying different (unknown) servers at once, but this is worth keeping in mind in case you are doing what the OP originally asked for.
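If you really are querying a large, unknown set of different hosts, one possible variation (again my own sketch, not part of the original answer; whether it actually helps depends on your workload) is to give each host its own ClientSession, trading the shared connection pool for per-host isolation:

import asyncio
import aiohttp
from contextlib import AsyncExitStack
from urllib.parse import urlparse


async def get(url, session):
    try:
        async with session.get(url=url) as response:
            return url, len(await response.read())
    except Exception as e:
        return url, e


async def main(urls):
    # Group the URLs by host, so every host gets its own session (and its own pool).
    by_host = {}
    for url in urls:
        by_host.setdefault(urlparse(url).netloc, []).append(url)

    async with AsyncExitStack() as stack:
        sessions = {host: await stack.enter_async_context(aiohttp.ClientSession())
                    for host in by_host}
        tasks = [get(url, sessions[urlparse(url).netloc]) for url in urls]
        return await asyncio.gather(*tasks)


# Driven exactly like the original code, e.g.: asyncio.run(main(urls))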

Solution 2

Q: Fastest parallel requests in Python

I cannot waste 1 millisecond

One can easily spend 5x more time doing the same amount of work if a bad approach is selected. Check the [ Epilogue ] section below to see one such exemplified code ( an MCVE-example ), where any of the Threads and/or Processes were way slower than a pure [SERIAL]-form of the process-execution. So indeed due care will be necessary here and in every real-world use-case.


  • Async using asyncio: I do not want to rely on a single thread; for some reason it may get stuck.

  • Threads: Is it really reliable on Python to use threads? Do I have the risk of 1 thread making
    another get stuck?

  • Multiprocesses: If I have one process controlling the others, would I lose too much time in interprocess communication?

The long story short:

HFT/Trading may benefit from an intentionally restricted-duration asyncio code, as demonstrated in detail below, so as to benefit from transport-latency masking ( interleaved progress of execution: while still waiting for the delivery of a remote-processing result, the CPU can do some other useful work in the meantime, letting the I/O-waiting threads stay idle ). Computing-heavy tasks, or very tight request/response behavioural patterns, will not be able to use this, either due to their computing-intensive nature ( no reason to go idle at all, so no beneficial CPU-releases will ever happen ) or due to the need to avoid any ( potentially deteriorating ) indeterminism inside a tight response time-window.

Threads are an a priori lost game in the standard python interpreter. The central GIL-lock enforces a pure-[SERIAL] code execution, one-after-another ( round-robin scheduled ), as explained here and interactively demonstrated ( here + code included ) - click + to zoom until you see 1-tick-per-pixel resolution, and you will see how often the other cores try to acquire the GIL-lock and fail, and you will also never see more than one-and-only-one green field of CPU-execution in any column. So a pure-[SERIAL] code execution happens even in a crowd of python threads ( real time goes to the right in the graphs ).
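A quick way to see that GIL-stepping for yourself (my own sketch, not from the original answer, assuming a standard CPython build with the GIL) is to time a CPU-bound function run twice serially versus in two threads; the two timings come out roughly the same, because the threads never run the bytecode truly in parallel:

import threading
import time


def burn(n=10_000_000):
    # Pure CPU-bound work, never waiting on I/O, so the GIL is never released for long.
    s = 0
    for i in range(n):
        s += i
    return s


t0 = time.perf_counter()
burn()
burn()
serial_s = time.perf_counter() - t0

t0 = time.perf_counter()
threads = [threading.Thread(target=burn) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded_s = time.perf_counter() - t0

print("2x serial: {:.2f} s, 2 threads: {:.2f} s (no speedup under the GIL)".format(serial_s, threaded_s))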

Processes-based multiprocessing is quite an expensive tool, yet it gives one a way to escape from the trap of the GIL-lock-[SERIAL]-ised python flow of processing. Inter-process communication is expensive if performed using the standard multiprocessing.Queue, but HFT/trading platforms may enjoy much faster / lower-latency tools for truly distributed, multi-host, performance-motivated designs. The details go beyond this format, yet they come from tens of years of microseconds-shaving for ultimate response robustness and latency minimisation in such distributed-computing trading systems.
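To get a rough feel for the order of magnitude of that IPC cost, a minimal sketch (my own, not from the original answer; absolute numbers vary wildly per platform) that times round-trips of a 1 KiB payload through a pair of standard multiprocessing.Queue instances could look like this:

import multiprocessing as mp
import time


def echo(q_in, q_out):
    # Child process: bounce every received item straight back to the parent.
    while True:
        item = q_in.get()
        if item is None:
            break
        q_out.put(item)


if __name__ == "__main__":
    q_in, q_out = mp.Queue(), mp.Queue()
    worker = mp.Process(target=echo, args=(q_in, q_out))
    worker.start()

    payload = b"x" * 1024                       # a 1 KiB message
    rounds = 10_000

    t0 = time.perf_counter()
    for _ in range(rounds):
        q_in.put(payload)
        q_out.get()                             # wait for the echoed copy
    elapsed = time.perf_counter() - t0

    print("~{:.1f} [us] per Queue round-trip".format(elapsed / rounds * 1e6))

    q_in.put(None)                              # tell the child to exit
    worker.join()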

Computer Science has taught me a lot of lessons on doing this right.

From a pure Computer-Science point of view, the approach to the problem proposed here by @Felipe Faria ( a solution that is not parallel in its nature ) made me post this answer.

I will now forget about all the HFT-trading tricks and just decompose the concept of latency masking ( asking 150+ API calls across the global internet for some data is by far not a true [PARALLEL] process-flow organisation ).

The example.com url-target, used in the simplified test code, shows about ~ 104-116 [ms] of network transport-latency to my test site. So my side has about that amount of CPU-idle time once each request has been dispatched over the network ( and an answer will never arrive sooner than that ~ 100 ms ).

Here, the time of that ( principally very loooooooooooong ) latency can be hidden by letting the CPU handle more threads doing other requests, since the thread that has already sent its request has to wait no matter what. This is called latency-masking and it may help reduce the end-to-end run-time, even inside GIL-stepped pythonic threads ( which otherwise had to be fully avoided for years in true, hardcore HPC-grade parallel-code ). For details, one may read about the GIL-release time, and one may also deduce, or observe in a test, the upper limit of such latency-masking: once there are way more requests in the salvo than the GIL-lock thread switchings ( forced transfers of execution ) that fit into one's actual network transport-latency, the masking saturates.
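A stripped-down, standard-library-only illustration of such thread-based latency-masking (my own sketch, not part of the original answer): the per-call times barely change, yet the whole batch finishes in roughly one round-trip time instead of the sum of all of them:

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor


def fetch(url):
    # The thread sits idle while the bytes travel over the network,
    # so the other threads get the CPU in the meantime (latency-masking).
    t0 = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as r:
        body = r.read()
    return url, len(body), time.perf_counter() - t0


urls = ["http://example.com"] * 25

t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=25) as pool:
    results = list(pool.map(fetch, urls))
batch_s = time.perf_counter() - t0

print("each call ~{:.0f}-{:.0f} [ms], whole batch ~{:.0f} [ms]".format(
    min(r[2] for r in results) * 1e3,
    max(r[2] for r in results) * 1e3,
    batch_s * 1e3))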


So the latency masking tricks demasked:

The simplified experiment has shown that the fired salvo of 25 test calls took ~ 273 [ms] in batch,
whereas each of the 25 latency-masked calls took ~ 232.6-266.9 [ms], i.e. the responses were heavily latency-masked, being just loosely, concurrently monitored from "outside" of their respective context-managers by the orchestrating tooling inside the event-loop's async / await mechanics, awaiting their respective async completion.

The power of the latency-masking can be seen from the fact that the first call launched, launch_id:< 0>, finished as the last but one (!)

This was possible because the url-retrieve process takes so long while having almost nothing to do with the local CPU-workload ( which stays IDLE until anything gets there-and-back to first start any processing on the fetched data ).

This is also the reason why latency-masking does not help "so impressively well" for processes where each [ns]-shaving is in place, such as the said HPC-processing or the HFT-trading engines.

>>> pass;         anAsyncEventLOOP = asyncio.get_event_loop()
>>> aClk.start(); anAsyncEventLOOP.run_until_complete( mainAsyncLoopPAYLOAD_wrapper( anAsyncEventLOOP, 25 ) );aClk.stop()

Now finished urlGetCOROUTINE(launch_id:<11>) E2E execution took    246193 [us](Safety anAsyncTIMEOUT was set 10 [s])
Now finished urlGetCOROUTINE(launch_id:<21>) E2E execution took    247013 [us](Safety anAsyncTIMEOUT was set 10 [s])
Now finished urlGetCOROUTINE(launch_id:< 2>) E2E execution took    237278 [us](Safety anAsyncTIMEOUT was set 10 [s])
Now finished urlGetCOROUTINE(launch_id:<20>) E2E execution took    247111 [us](Safety anAsyncTIMEOUT was set 10 [s])
Now finished urlGetCOROUTINE(launch_id:<23>) E2E execution took    252462 [us](Safety anAsyncTIMEOUT was set 10 [s])
Now finished urlGetCOROUTINE(launch_id:<16>) E2E execution took    237591 [us](Safety anAsyncTIMEOUT was set 10 [s])
Now finished urlGetCOROUTINE(launch_id:< 1>) E2E execution took    243398 [us](Safety anAsyncTIMEOUT was set 10 [s])
Now finished urlGetCOROUTINE(launch_id:< 9>) E2E execution took    232643 [us](Safety anAsyncTIMEOUT was set 10 [s])
Now finished urlGetCOROUTINE(launch_id:< 6>) E2E execution took    247308 [us](Safety anAsyncTIMEOUT was set 10 [s])
Now finished urlGetCOROUTINE(launch_id:<17>) E2E execution took    250773 [us](Safety anAsyncTIMEOUT was set 10 [s])
Now finished urlGetCOROUTINE(launch_id:<24>) E2E execution took    245354 [us](Safety anAsyncTIMEOUT was set 10 [s])
Now finished urlGetCOROUTINE(launch_id:<10>) E2E execution took    259812 [us](Safety anAsyncTIMEOUT was set 10 [s])
Now finished urlGetCOROUTINE(launch_id:<13>) E2E execution took    241707 [us](Safety anAsyncTIMEOUT was set 10 [s])
Now finished urlGetCOROUTINE(launch_id:< 3>) E2E execution took    258745 [us](Safety anAsyncTIMEOUT was set 10 [s])
Now finished urlGetCOROUTINE(launch_id:< 4>) E2E execution took    243659 [us](Safety anAsyncTIMEOUT was set 10 [s])
Now finished urlGetCOROUTINE(launch_id:<18>) E2E execution took    249252 [us](Safety anAsyncTIMEOUT was set 10 [s])
Now finished urlGetCOROUTINE(launch_id:< 8>) E2E execution took    245812 [us](Safety anAsyncTIMEOUT was set 10 [s])
Now finished urlGetCOROUTINE(launch_id:<12>) E2E execution took    244684 [us](Safety anAsyncTIMEOUT was set 10 [s])
Now finished urlGetCOROUTINE(launch_id:< 5>) E2E execution took    257701 [us](Safety anAsyncTIMEOUT was set 10 [s])
Now finished urlGetCOROUTINE(launch_id:<15>) E2E execution took    243001 [us](Safety anAsyncTIMEOUT was set 10 [s])
Now finished urlGetCOROUTINE(launch_id:< 7>) E2E execution took    256776 [us](Safety anAsyncTIMEOUT was set 10 [s])
Now finished urlGetCOROUTINE(launch_id:<22>) E2E execution took    266979 [us](Safety anAsyncTIMEOUT was set 10 [s])
Now finished urlGetCOROUTINE(launch_id:<14>) E2E execution took    252169 [us](Safety anAsyncTIMEOUT was set 10 [s])
Now finished urlGetCOROUTINE(launch_id:< 0>) E2E execution took    263190 [us](Safety anAsyncTIMEOUT was set 10 [s])
Now finished urlGetCOROUTINE(launch_id:<19>) E2E execution took    247591 [us](Safety anAsyncTIMEOUT was set 10 [s])
273829

pass;    import aiohttp, asyncio, async_timeout
from zmq import Stopwatch

async def urlGetCOROUTINE( aSESSION, anURL2GET, aCoroID = -1, anAsyncTIMEOUT = 10 ):
    aLocalCLK = Stopwatch()
    res       = ""
    ############################################# SECTION-UNDER-TEST
    aLocalCLK.start() ##############################################
    with async_timeout.timeout( anAsyncTIMEOUT ):# RESPONSE ######## TIMEOUT-PROTECTED
         async  with aSESSION.get( anURL2GET ) as aRESPONSE:
            while True:
                    pass;  aGottenCHUNK = await   aRESPONSE.content.read( 1024 )
                    if not aGottenCHUNK:
                        break
                    res += str( aGottenCHUNK )
            await                                 aRESPONSE.release()
    ################################################################ TIMEOUT-PROTECTED
    aTestRunTIME_us = aLocalCLK.stop() ########## SECTION-UNDER-TEST

    print( "Now finished urlGetCOROUTINE(launch_id:<{2: >2d}>) E2E execution took {0: >9d} [us](Safety anAsyncTIMEOUT was set {1: >2d} [s])".format( aTestRunTIME_us, anAsyncTIMEOUT, aCoroID ) )
    return ( aTestRunTIME_us, len( res ) )

async def mainAsyncLoopPAYLOAD_wrapper( anAsyncLOOP_to_USE, aNumOfTESTs = 10, anUrl2GoGET = "http://example.com" ):
    '''
    aListOfURLs2GET = [ "https://www.irs.gov/pub/irs-pdf/f1040.pdf",
                        "https://www.forexfactory.com/news",
                         ...
                         ]
    '''
    async with aiohttp.ClientSession( loop = anAsyncLOOP_to_USE ) as aSESSION:
        aBlockOfAsyncCOROUTINEs_to_EXECUTE = [ urlGetCOROUTINE(      aSESSION, anUrl2GoGET, launchID ) for launchID in range( min( aNumOfTESTs, 1000 ) ) ]
        await asyncio.gather( *aBlockOfAsyncCOROUTINEs_to_EXECUTE )

Epilogue: the same work may take 5x longer ...

All the run-time times are in [us].

Both the Process- and Thread-based forms of just-[CONCURRENT]-processing have accumulated immense instantiation overheads plus results-collection and transfer overheads ( threading with an additional, indeterministic variability of run-time ), whereas the pure-[SERIAL] process-flow was by far the fastest and most efficient way to get the job done. For larger f-s these overheads will grow beyond all limits and may soon introduce O/S swapping and other system-resource-deteriorating side-effects, so be careful.

                                                                                                                                                                              602283L _ _ _ _ _ _ _ _ _
>>> aClk.start(); len( str( Parallel( n_jobs = -1                        )( delayed( np.math.factorial ) ( 2**f ) for f in range( 14 ) ) [-1] ) ); aClk.stop()        28504   512459L [PAR]   QUAD-CORE .multiprocessing
>>> aClk.start(); len( str( Parallel( n_jobs = -1                        )( delayed( np.math.factorial ) ( 2**f ) for f in range( 14 ) ) [-1] ) ); aClk.stop()        28504   511655L
>>> aClk.start(); len( str( Parallel( n_jobs = -1                        )( delayed( np.math.factorial ) ( 2**f ) for f in range( 14 ) ) [-1] ) ); aClk.stop()        28504   506400L
>>> aClk.start(); len( str( Parallel( n_jobs = -1                        )( delayed( np.math.factorial ) ( 2**f ) for f in range( 14 ) ) [-1] ) ); aClk.stop()        28504   508031L
>>> aClk.start(); len( str( Parallel( n_jobs = -1                        )( delayed( np.math.factorial ) ( 2**f ) for f in range( 14 ) ) [-1] ) ); aClk.stop()        28504   514377L _ _ _ _ _ _ _ _ _

>>> aClk.start(); len( str( Parallel( n_jobs =  1                        )( delayed( np.math.factorial ) ( 2**f ) for f in range( 14 ) ) [-1] ) ); aClk.stop()        28504   123185L [PAR] SINGLE-CORE
>>> aClk.start(); len( str( Parallel( n_jobs =  1                        )( delayed( np.math.factorial ) ( 2**f ) for f in range( 14 ) ) [-1] ) ); aClk.stop()        28504   122631L
>>> aClk.start(); len( str( Parallel( n_jobs =  1                        )( delayed( np.math.factorial ) ( 2**f ) for f in range( 14 ) ) [-1] ) ); aClk.stop()        28504   125139L
>>> aClk.start(); len( str( Parallel( n_jobs =  1                        )( delayed( np.math.factorial ) ( 2**f ) for f in range( 14 ) ) [-1] ) ); aClk.stop()        28504   124358L _ _ _ _ _ _ _ _ _

>>> aClk.start(); len( str( Parallel( n_jobs = -1, backend = 'threading' )( delayed( np.math.factorial ) ( 2**f ) for f in range( 14 ) ) [-1] ) ); aClk.stop()        28504   213990L [PAR]   QUAD-CORE .threading
>>> aClk.start(); len( str( Parallel( n_jobs = -1, backend = 'threading' )( delayed( np.math.factorial ) ( 2**f ) for f in range( 14 ) ) [-1] ) ); aClk.stop()        28504   201337L
>>> aClk.start(); len( str( Parallel( n_jobs = -1, backend = 'threading' )( delayed( np.math.factorial ) ( 2**f ) for f in range( 14 ) ) [-1] ) ); aClk.stop()        28504   199485L
>>> aClk.start(); len( str( Parallel( n_jobs = -1, backend = 'threading' )( delayed( np.math.factorial ) ( 2**f ) for f in range( 14 ) ) [-1] ) ); aClk.stop()        28504   198174L
>>> aClk.start(); len( str( Parallel( n_jobs = -1, backend = 'threading' )( delayed( np.math.factorial ) ( 2**f ) for f in range( 14 ) ) [-1] ) ); aClk.stop()        28504   169204L
>>> aClk.start(); len( str( Parallel( n_jobs = -1, backend = 'threading' )( delayed( np.math.factorial ) ( 2**f ) for f in range( 14 ) ) [-1] ) ); aClk.stop()        28504   168658L
>>> aClk.start(); len( str( Parallel( n_jobs = -1, backend = 'threading' )( delayed( np.math.factorial ) ( 2**f ) for f in range( 14 ) ) [-1] ) ); aClk.stop()        28504   171793L _ _ _ _ _ _ _ _ _

>>> aClk.start(); len( str(                                                        [ np.math.factorial(    2**f ) for f in range( 14 ) ] [-1] ) ); aClk.stop()        28504   121401L [SEQ] SINGLE-CORE
                                                                                                                                                                              126381L
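For anyone who wants to reproduce the epilogue measurements without a zmq.Stopwatch, a minimal sketch of the same three cases (my own; it assumes joblib is installed, substitutes math.factorial for np.math.factorial, and absolute timings will of course differ per machine) could look like this:

import math
import time

from joblib import Parallel, delayed


def timed(fn):
    # Return ( result, elapsed-time in [us] ), mirroring the aClk.start()/aClk.stop() pattern above.
    t0 = time.perf_counter()
    result = fn()
    return result, int((time.perf_counter() - t0) * 1e6)


if __name__ == "__main__":
    _, t_mp  = timed(lambda: Parallel(n_jobs=-1)(delayed(math.factorial)(2**f) for f in range(14)))
    _, t_thr = timed(lambda: Parallel(n_jobs=-1, backend="threading")(delayed(math.factorial)(2**f) for f in range(14)))
    _, t_ser = timed(lambda: [math.factorial(2**f) for f in range(14)])

    print("[PAR] .multiprocessing : {:>9d} [us]".format(t_mp))
    print("[PAR] .threading       : {:>9d} [us]".format(t_thr))
    print("[SEQ] pure-[SERIAL]    : {:>9d} [us]".format(t_ser))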

Solution 3

For the faint of heart, here is another way of writing the @user3666197 code above (also see the related question):

import aiohttp, asyncio, async_timeout
import time


async def get_url(session, url, corou_id=-1, timeout=10):
    start = time.time()
    res = ""
    # SECTION-UNDER-TEST
    async with session.get(url, timeout=timeout) as response:
        while True:
            chunk = await response.content.read(1024)
            if not chunk:
                break
            res += str(chunk)
        await response.release()
    end = time.time()
    runtime = int((end - start) * 1_000_000)  # convert seconds to [us] so the {0: >9d} format below works

    print(
        "Now finished get_url(launch_id:<{2: >2d}>) E2E execution took {0: >9d} [us](Safety timeout was set {1: >2d} [s])".format(
            runtime, timeout, corou_id))
    return runtime, len(res)


async def async_payload_wrapper(async_loop, number_of_tests=10, url="http://example.com"):
    '''
    urls = [ "https://www.irs.gov/pub/irs-pdf/f1040.pdf",
                        "https://www.forexfactory.com/news",
                         ...
                         ]
    '''
    async with aiohttp.ClientSession(loop=async_loop) as session:
        corou_to_execute = [get_url(session, url, launchID)
                            for launchID in range(min(number_of_tests, 1000))]
        await asyncio.gather(*corou_to_execute)


if __name__ == '__main__':
    event_loop = asyncio.get_event_loop()
    event_loop.run_until_complete(async_payload_wrapper(event_loop, 25))

Solution 4

I created a package for it

Github: https://github.com/singhsidhukuldeep/request-boost

PyPi: https://pypi.org/project/request-boost/

pip install request-boost

from request_boost import boosted_requests

results = boosted_requests(urls=urls)
print(results)

More control:

from request_boost import boosted_requests

# Sample data
number_of_sample_urls = 1000
urls = [f'https://postman-echo.com/get?random_data={test_no}' for test_no in range(number_of_sample_urls)]
headers = [{'sample_header': test_no} for test_no in range(number_of_sample_urls)]

results = boosted_requests(urls=urls, no_workers=16, max_tries=5, timeout=5, headers=headers)
print(results)


DOCS:

boosted_requests(urls, no_workers=8, max_tries=3, timeout=10, headers=None)

Get data from APIs in parallel by creating workers that process in the background
    :param urls: list of URLS
    :param no_workers: maximum number of parallel processes
    :param max_tries: Maximum number of tries before failing for a specific URL
    :param timeout: Waiting time per request
    :param headers: Headers if any for the URL requests
    :return: List of response for each API (order is maintained)
Author: Pedro Serra. Updated on July 09, 2022.

Comments

  • Pedro Serra, almost 2 years ago:

    I need to keep making many requests to about 150 APIs, on different servers. I work in trading; time is crucial, I cannot waste 1 millisecond.

    The solutions and problems I found were these:

    • Async using Asyncio: I do not want to rely on a single thread; for some reason it may get stuck.
    • Threads: Is it really reliable on Python to use threads? Do I have the risk of 1 thread making
      another get stuck?
    • Multiprocesses: If I have one process controlling the others, would I lose too much time in interprocess communication?

    Maybe a solution that uses all of that.

    If there is no really good solution in Python, what should I use instead?

    # Using Asyncio
    import asyncio
    import requests
    
    async def main():
        loop = asyncio.get_event_loop()
        future1 = loop.run_in_executor(None, requests.get, 'http://www.google.com')
        future2 = loop.run_in_executor(None, requests.get, 'http://www.google.co.uk')
        response1 = await future1
        response2 = await future2
        print(response1.text)
        print(response2.text)
    
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
    
    
    # Using Threads
    from threading import Thread
    
    def do_api(url):
        #...
        #...
    
    #...
    #...
    for i in range(50):
        t = Thread(target=do_api, args=(url_api[i],))
        t.start()