#!/usr/bin/env python
# coding: utf-8

# > This is one of the 100 recipes of the [IPython Cookbook](http://ipython-books.github.io/), the definitive guide to high-performance scientific computing and data science in Python.

# # 13.4. Simulating a stochastic differential equation

# 1. Let's import NumPy and matplotlib.

# In[ ]:

import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
get_ipython().run_line_magic('matplotlib', 'inline')

# 2. Let's define a few parameters for our model.

# In[ ]:

sigma = 1.  # Standard deviation.
mu = 10.  # Mean.
tau = .05  # Time constant.

# 3. We also define a few simulation parameters.

# In[ ]:

dt = .001  # Time step.
T = 1.  # Total time.
n = int(T / dt)  # Number of time steps.
t = np.linspace(0., T, n)  # Vector of times.

# 4. We also define renormalized variables, so that these constants are not recomputed at every time step.

# In[ ]:

sigma_bis = sigma * np.sqrt(2. / tau)  # Diffusion coefficient of the SDE.
sqrtdt = np.sqrt(dt)  # Square root of the time step, used for the noise term.

# 5. We create a vector that will contain all successive values of our process during the simulation.

# In[ ]:

x = np.zeros(n)

# 6. Now, we simulate the process with the Euler-Maruyama method. It is similar to the standard Euler method for ODEs, but with an extra stochastic term, which is simply a scaled normal random variable. We will give the equation of the process along with the details of this method in *How it works...*; a short summary of the scheme is also given at the end of this recipe.

# In[ ]:

for i in range(n - 1):
    x[i + 1] = x[i] + dt * (-(x[i] - mu) / tau) + \
        sigma_bis * sqrtdt * np.random.randn()

# 7. Let's display the evolution of the process.

# In[ ]:

plt.figure(figsize=(6, 3));
plt.plot(t, x);

# 8. Now, we are going to take a look at the time evolution of the distribution of the process. To do that, we will simulate many independent realizations of the same process in a vectorized way. We define a vector `X` that will contain all realizations of the process at a given time (that is, we do not keep all realizations at all times in memory). This vector is completely overwritten at every time step. We will show the estimated distribution (histograms) at several points in time.

# In[ ]:

ntrials = 10000
X = np.zeros(ntrials)

# In[ ]:

# We create bins for the histograms.
bins = np.linspace(-2., 14., 100);
plt.figure(figsize=(6, 3));
for i in range(n):
    # We update the process independently for all trials.
    X += dt * (-(X - mu) / tau) + \
        sigma_bis * sqrtdt * np.random.randn(ntrials)
    # We display the histogram for a few points in time.
    if i in (5, 50, 900):
        hist, _ = np.histogram(X, bins=bins)
        plt.plot((bins[1:] + bins[:-1]) / 2, hist,
                 {5: '-', 50: '.', 900: '-.'}[i],
                 label="t={0:.2f}".format(i * dt));
plt.legend();

# The distribution of the process tends to a Gaussian distribution with mean $\mu=10$ and standard deviation $\sigma=1$. The process would be stationary if the initial distribution were also a Gaussian with the adequate parameters. A quick numerical check of this claim is sketched at the end of this recipe.

# > You'll find all the explanations, figures, references, and much more in the book (to be released later this summer).

# > [IPython Cookbook](http://ipython-books.github.io/), by [Cyrille Rossant](http://cyrille.rossant.net), Packt Publishing, 2014 (500 pages).
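
# *A note on the equation (this summary is not in the original recipe; the book's *How it works...* section has the full details).* The process simulated here is the Ornstein-Uhlenbeck process, which in this recipe's notation reads
#
# $$dx = -\frac{x - \mu}{\tau}\,dt + \sigma\sqrt{\frac{2}{\tau}}\,dW$$
#
# where $dW$ is a Wiener increment. The Euler-Maruyama scheme discretizes it as $x_{i+1} = x_i - \frac{x_i - \mu}{\tau}\,dt + \sigma\sqrt{\frac{2}{\tau}}\,\sqrt{dt}\,\xi_i$ with $\xi_i \sim \mathcal{N}(0, 1)$, which is exactly the update used in steps 6 and 8 (`sigma_bis` is $\sigma\sqrt{2/\tau}$ and `sqrtdt` is $\sqrt{dt}$). The factor $\sqrt{2/\tau}$ is what makes the stationary standard deviation equal to $\sigma$: for $dx = -(x-\mu)/\tau\,dt + \sigma'\,dW$, the stationary variance is $\sigma'^2\tau/2$, which equals $\sigma^2$ when $\sigma' = \sigma\sqrt{2/\tau}$.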
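
# *A quick sanity check (not part of the original recipe).* This is a minimal sketch, assuming the cells above have been run: after the loop in step 8, `X` holds all realizations at the final time $T$, so its sample mean and standard deviation should be close to the theoretical stationary values $\mu$ and $\sigma$.

# In[ ]:

# `X`, `mu`, and `sigma` come from the cells above.
print("sample mean: {0:.3f} (expected {1})".format(X.mean(), mu))
print("sample std:  {0:.3f} (expected {1})".format(X.std(), sigma))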