This is one of the 100 recipes of the IPython Cookbook, the definitive guide to high-performance scientific computing and data science in Python.

# 4.4. Profiling the memory usage of your code with memory_profiler

Standard imports.

In [ ]:
import numpy as np


After installing memory_profiler, we can load the IPython extension.

In [ ]:
%load_ext memory_profiler


For %mprun to work, we need to encapsulate the code in a function and save it in a Python script.

In [ ]:
%%writefile simulation.py
import numpy as np

def step(*shape):
    # Create a random n-vector with +1 or -1 values.
    return 2 * (np.random.random_sample(shape) < .5) - 1

def simulate(iterations, n=10000):
    s = step(iterations, n)
    x = np.cumsum(s, axis=0)
    bins = np.arange(-30, 30, 1)
    y = np.vstack([np.histogram(x[i, :], bins)[0]
                   for i in range(iterations)])
    return y
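
Before profiling, it helps to estimate the arrays' footprint by hand: `s` and `x` are both `iterations × n` integer arrays. The sketch below inlines the `step` logic so it runs standalone; the exact byte counts assume NumPy's default 8-byte integers on a 64-bit platform.

```python
import numpy as np

iterations, n = 50, 10000
# Inline the step() logic: 2 * bool - 1 yields an integer array of +1/-1.
s = 2 * (np.random.random_sample((iterations, n)) < .5) - 1
x = np.cumsum(s, axis=0)
print(s.dtype, s.nbytes / 1e6, "MB")  # typically int64, so about 4.0 MB
print(x.dtype, x.nbytes / 1e6, "MB")
```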


Now, we import this script so that the function is available in the interactive namespace.

In [ ]:
import simulation


Let's execute the function under the control of the memory profiler.

In [ ]:
%mprun -T mprof0 -f simulation.simulate simulation.simulate(50)

In [ ]:
print(open('mprof0', 'r').read())
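
When memory_profiler is not available, the standard library's tracemalloc module can give a rough peak-memory figure for the same call (NumPy reports its allocations to tracemalloc since NumPy 1.13). A minimal sketch, with the simulation code inlined so it runs standalone:

```python
import tracemalloc
import numpy as np

def step(*shape):
    # Random array of +1/-1 values.
    return 2 * (np.random.random_sample(shape) < .5) - 1

def simulate(iterations, n=10000):
    s = step(iterations, n)
    x = np.cumsum(s, axis=0)
    bins = np.arange(-30, 30, 1)
    return np.vstack([np.histogram(x[i, :], bins)[0]
                      for i in range(iterations)])

tracemalloc.start()
simulate(50)
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
print(f"peak traced memory: {peak / 1e6:.1f} MB")
```

Unlike %mprun, this gives a single aggregate figure rather than a line-by-line breakdown, but it needs no third-party package.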


Let's run the simulation with 10 times more iterations.

In [ ]:
%mprun -T mprof1 -f simulation.simulate simulation.simulate(iterations=500)

In [ ]:
print(open('mprof1', 'r').read())
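
The 10x increase in iterations should translate into a roughly 10x larger footprint for `s` and `x`, since both arrays scale linearly with the number of iterations. A quick back-of-envelope check (assuming 8 bytes per element, NumPy's default integer size on 64-bit platforms):

```python
n = 10000
for iterations in (50, 500):
    # Bytes for one iterations-by-n int64 array (s; x has the same size).
    per_array = iterations * n * 8
    print(iterations, "iterations ->", per_array / 1e6, "MB per array")
```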


You'll find all the explanations, figures, references, and much more in the book (to be released later this summer).

IPython Cookbook, by Cyrille Rossant, Packt Publishing, 2014 (500 pages).