Parfor for Python
Solution 1
There are many Python frameworks for parallel computing. The one I happen to like most is IPython, but I don't know too much about any of the others. In IPython, one analogue to parfor would be client.MultiEngineClient.map() or some of the other constructs described in the documentation on quick and easy parallelism.
Solution 2
The one built into Python is the multiprocessing module; the docs are here. I always use multiprocessing.Pool with as many workers as processors. Then whenever I need a for-loop-like structure, I use Pool.imap.
As long as the body of your function does not depend on any previous iteration, you should get near-linear speed-up. This also requires that your inputs and outputs be pickle-able, but that is pretty easy to ensure for standard types.
UPDATE: Some code for your updated function, just to show how easy it is (note that chunksize must be passed to imap, not to enumerate):
import numpy as np
from multiprocessing import Pool
from itertools import product

N = 2000
output = np.zeros((N, N))
pool = Pool()   # defaults to the number of available CPUs
chunksize = 20  # this may take some guessing ... take a look at the docs to decide
# Fun must accept a single (i, j) tuple, since product yields tuples
for ind, res in enumerate(pool.imap(Fun, product(range(N), range(N)), chunksize)):
    output.flat[ind] = res
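For clarity, .flat indexes a 2-D array as a flat, row-major sequence, which is why a single running index works above. A tiny illustration:

```python
import numpy as np

a = np.zeros((2, 3))
# .flat views the array as a 1-D, row-major sequence:
# flat index 4 corresponds to row 1, column 1 (4 = 1*3 + 1).
a.flat[4] = 7.0
print(a[1, 1])  # 7.0
```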
Solution 3
Jupyter Notebook
To see an example, consider that you want to write the equivalent of this MATLAB code in Python:
matlabpool open 4
parfor n=0:9
for i=1:10000
for j=1:10000
s=j*i
end
end
n
end
disp('done')
Here is how one may write this in Python, particularly in a Jupyter notebook. You have to create a function in the working directory (I called it FunForParFor.py) containing the following:
def func(n):
    for i in range(10000):
        for j in range(10000):
            s = j * i
    print(n)
Then I go to my Jupyter notebook and write the following code:
import multiprocessing
import FunForParFor
if __name__ == '__main__':
    pool = multiprocessing.Pool(processes=4)
    pool.map(FunForParFor.func, range(10))
    pool.close()
    pool.join()
    print('done')
This has worked for me! I just wanted to share it here to give you a particular example.
Solution 4
This can be done elegantly with Ray, a system that allows you to easily parallelize and distribute your Python code.
To parallelize your example, you'd need to define your functions with the @ray.remote decorator and then invoke them with .remote.
import numpy as np
import time
import ray
ray.init()
# Define the function. Each remote function will be executed
# in a separate process.
@ray.remote
def HeavyComputationThatIsThreadSafe(i, j):
    n = i * j
    time.sleep(0.5)  # Simulate some heavy computation.
    return n
N = 10
output_ids = []
for i in range(N):
    for j in range(N):
        # Remote functions return a future, i.e., an identifier to the
        # result, rather than the result itself. This allows invoking
        # the next remote function before the previous one has finished,
        # which leads to the remote functions being executed in parallel.
        output_ids.append(HeavyComputationThatIsThreadSafe.remote(i, j))
# Get results when ready.
output_list = ray.get(output_ids)
# Move results into an NxN numpy array.
outputs = np.array(output_list).reshape(N, N)
# This program should take approximately N*N*0.5s/p, where
# p is the number of cores on your machine, N*N
# is the number of times we invoke the remote function,
# and 0.5s is the time it takes to execute one instance
# of the remote function. For example, for two cores this
# program will take approximately 25sec.
There are a number of advantages of using Ray over the multiprocessing module. In particular, the same code will run on a single machine as well as on a cluster of machines. For more advantages of Ray see this related post.
Note: One point to keep in mind is that each remote function is executed in a separate process, possibly on a different machine, so the remote function's computation should take longer than the cost of invoking it. As a rule of thumb, a remote function's computation should take at least a few tens of milliseconds to amortize the scheduling and startup overhead of a remote function.
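For instance, one way to respect that rule of thumb is to batch many (i, j) pairs into a single call, so each call does enough work to amortize the overhead. A plain-Python sketch of the chunking (the names and chunk size are illustrative, not part of Ray's API; with Ray you would decorate compute_batch with @ray.remote and submit each batch via compute_batch.remote(batch)):

```python
from itertools import product

def compute_batch(pairs):
    # One call processes a whole batch of (i, j) pairs, so the
    # per-call scheduling overhead is amortized over many items.
    return [i * j for i, j in pairs]

N = 10
pairs = list(product(range(N), range(N)))
chunk = 25  # illustrative batch size
batches = [pairs[k:k + chunk] for k in range(0, len(pairs), chunk)]

# Here we run the batches serially just to show the bookkeeping.
results = [r for batch in batches for r in compute_batch(batch)]
print(len(results))  # 100
```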
Solution 5
I've always used Parallel Python, but it's not a complete analogue, since I believe it typically uses separate processes, which can be expensive on certain operating systems. Still, if the bodies of your loops are chunky enough then this won't matter and it can actually have some benefits.
Dat Chu
Updated on September 03, 2020

Comments
-
Dat Chu over 3 years
I am looking for a definitive answer to MATLAB's parfor for Python (Scipy, Numpy).
Is there a solution similar to parfor? If not, what is the complication for creating one?
UPDATE: Here is a typical numerical computation code that I need speeding up
import numpy as np

N = 2000
output = np.zeros([N, N])
for i in range(N):
    for j in range(N):
        output[i, j] = HeavyComputationThatIsThreadSafe(i, j)
An example of a heavy computation function is:
import scipy.optimize

def HeavyComputationThatIsThreadSafe(i, j):
    n = i * j
    return scipy.optimize.anneal(lambda x: np.sum((x - np.arange(n)**2)),
                                 np.random.random((n, 1)))[0][0, 0]
-
David Heffernan over 13 years
+1 Didn't know about client.MultiEngineClient even though I do use IPython. Thanks for the steer!
-
Dat Chu over 13 years
It is not apparent to me whether I can run code sped up with the IPython parallel computing framework in script mode, i.e. not running through ipython.
-
Sven Marnach over 13 years
@Dat Chu: Of course you can. Just write the commands you would type at the prompt in a file and run it with Python. (Is this what you are asking for?)
-
Sven Marnach over 13 years
You should replace output[ind] by output.flat[ind] to make the code work. (output is a two-dimensional array and would need two indices.)
-
JudoWill over 13 years
@Sven: Thanks ... that comes from switching between matlab and python all the time.
-
tsh over 12 years
Up-to-date link to the documentation on quick and easy parallelism.
-
A. Donda almost 9 years
Sven, I think you mean parfor where you write matfor.
-
Nimrod Morag over 4 years
This only speeds things up if the computation is entirely supported by numba; see the docs for a list.
-
River almost 4 years
Actually, that is where the pain comes from. There are too many of them, and none of them suits all purposes. You need to try out for yourself which works and which doesn't, and which is faster in which case. Sometimes an implementation will even be slower than no parallelism at all.
-
Sven Marnach almost 4 years
@River Yes, optimising code is tedious, and parallelism is hard. I suggest you start with multiprocessing, which is part of the standard library.
-
japreiss almost 4 years
-
eric almost 2 years
I like this as it is very useful for practical use. It would be really helpful to replace i with something more explicit, and to explain why imap_unordered() is used instead of imap() or map().