This notebook accompanies this blog post, and includes an example of running a loop in parallel with the multiprocessing module from the standard library.
First import the required functions from the standard library:
# Bake in constant arguments to a function
from functools import partial
# Package for running code in parallel
from multiprocessing import Pool
Then we define our "worker" function, which performs the work of a single loop iteration.
def worker_fn(changing_stuff, const_value):
    '''A function whose first parameter holds the values
    that change on each loop iteration; the remaining
    parameters stay constant across iterations.
    '''
a, b = changing_stuff
return (a + b) * const_value
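For example (illustrative values, not from the original post), a single call computes one loop iteration:
# (2 + 3) * 10 == 50
worker_fn((2, 3), 10)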
Set up an example problem to work with:
avalues = range(20)
bvalues = range(100, 120)
const = 100
"Bake in" our constant arguments to create our partially applied function, which takes a single parameter.
fn = partial(worker_fn, const_value=const)
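As a quick sanity check (an assumed example, not part of the original notebook), the partial gives the same answer as the full call with const_value filled in:
# fn only needs the changing arguments now
assert fn((1, 2)) == worker_fn((1, 2), const_value=const)  # (1 + 2) * 100 == 300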
Create an iterable of everything that changes on each loop iteration:
# this must be an `iterable`, so a list or generator
zipped_args = zip(avalues, bvalues)
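To peek at the argument pairs (a hypothetical preview; note that zip returns a one-shot iterator in Python 3, so build a fresh one to hand to the pool):
# First few (a, b) pairs that the worker will receive
list(zip(avalues, bvalues))[:3]  # [(0, 100), (1, 101), (2, 102)]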
Finally, run the code in parallel, using all of the processors on the machine:
# Python 3: the `with` statement closes the pool for us
with Pool() as pool:
    results = pool.map(fn, zipped_args)

# Or, if you're stuck with Python 2 (Pool only gained context-manager
# support in Python 3.3), remember to clean up explicitly:
# pool = Pool()
# try:
#     results = pool.map(fn, zipped_args)
# finally:
#     pool.close()
#     pool.join()
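To double-check the parallel run (a sketch, not from the original), compare the results against the equivalent sequential loop:
# zipped_args was consumed by pool.map, so rebuild the pairs
expected = [fn(args) for args in zip(avalues, bvalues)]
assert results == expected
results[:3]  # [10000, 10200, 10400]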