---
layout: default
title: "The Alternating Direction Implicit Method"
date: 2013-12-03
author: Georg R Walther
---

Tags: wip

*This post is a work in progress.*

# The Alternating Direction Implicit Method

Just as the Crank-Nicolson (CN) method is commonly used for reaction-diffusion systems with one space dimension, the Alternating Direction Implicit (ADI) method is commonly used for reaction-diffusion systems with two space dimensions.

The ADI method has been described thoroughly many times, for instance by Dehghan.

In the following discussion we will consider a reaction-diffusion system similar to the one we studied previously and we will use analogous Neumann boundary conditions.

$$\frac{\partial u}{\partial t} = D \left( \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} \right) + f(u),$$

$$\frac{\partial u}{\partial x}\Bigg|_{x = 0, L_x} = 0,$$

$$\frac{\partial u}{\partial y}\Bigg|_{y = 0, L_y} = 0,$$

where $u(x,y,t)$ is our concentration variable, $D$ is the diffusion coefficient of $u$, $f$ is the reaction term, and $L_x$ and $L_y$ are the extent of our domain in the $x$ and $y$ direction respectively.

Since ADI and CN are somewhat related, let us try to derive the most basic properties of the ADI in relation to CN.

## Grid Construction

Analogous to the $(x,t)$-grid used in CN, we need to construct an $(x,y,t)$-grid for this problem with two space dimensions.

In the simplest case, we construct a regular grid as follows

$$t_n = n \Delta t,~ n = 0, \ldots, N-1,$$

$$x_j = j \Delta x,~ j = 0, \ldots, J-1,$$

$$y_i = i \Delta y,~ i = 0, \ldots, I-1,$$

where $N$, $J$, and $I$ are the numbers of grid points in the $t$-, $x$-, and $y$-directions, respectively.

$\Delta t$, $\Delta x$, and $\Delta y$ are defined as follows

$$\Delta t = \frac{T}{N-1},~ \Delta x = \frac{L_x}{J-1},~ \Delta y = \frac{L_y}{I-1},$$

where $T$ is the total amount of time we are interested in.
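These grid definitions translate directly into NumPy; as a quick sketch (the extents and sizes here are placeholder values), `numpy.linspace` with `retstep=True` returns both the grid and the spacing in one call:

```python
import numpy

# placeholder extents and grid sizes
L_x, L_y, T = 1., 2., 1.
J, I, N = 100, 200, 1000

# numpy.linspace with retstep=True returns the grid points and the spacing
x_grid, dx = numpy.linspace(0., L_x, J, retstep=True)
y_grid, dy = numpy.linspace(0., L_y, I, retstep=True)
t_grid, dt = numpy.linspace(0., T, N, retstep=True)

assert numpy.isclose(dx, L_x/(J - 1))   # matches Delta x = L_x/(J-1)
assert numpy.isclose(dy, L_y/(I - 1))
assert numpy.isclose(dt, T/(N - 1))
```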

As before we will refer to the numerical approximations of the unknown analytic solution $u(x,y,t)$ in our grid points as

$$U(j \Delta x, i \Delta y, n \Delta t) \approx u(j \Delta x, i \Delta y, n \Delta t),$$

and we use the shorthand $U(j \Delta x, i \Delta y, n \Delta t) = U_{j,i}^n.$ We also refer to the grid point $(j \Delta x, i \Delta y, n \Delta t)$ as $(j,i,n)$.

When integrating the above reaction-diffusion equation numerically, we can still make use of the CN stencil.

Applying the CN stencil to our reaction-diffusion equation on our $(x,y,t)$-grid, we obtain:

$$\frac{U_{j,i}^{n+1} - U_{j,i}^n}{\Delta t} = \frac{D}{2 \Delta x^2} \left( U_{j+1,i}^n - 2 U_{j,i}^n + U_{j-1,i}^n + U_{j+1,i}^{n+1} - 2 U_{j,i}^{n+1} + U_{j-1,i}^{n+1}\right) + \frac{D}{2 \Delta y^2} \left( U_{j,i+1}^n - 2 U_{j,i}^n + U_{j,i-1}^n + U_{j,i+1}^{n+1} - 2 U_{j,i}^{n+1} + U_{j,i-1}^{n+1}\right) + f(U_{j,i}^n).$$

Let us define $\sigma_x = \frac{D \Delta t}{2 \Delta x^2}$ and $\sigma_y = \frac{D \Delta t}{2 \Delta y^2}$.

To reorder our stencil into a linear system we need to define a new index that combines indices $i$ and $j$:

$$k = j + i J$$

This new index $k$ flattens the two-dimensional spatial part of our $(x,y,t)$-grid into a (one-dimensional) vector and our "flattening methodology" is analogous to the row-major order representation of matrices.

To illustrate, when we use the $k$ index, grid points $(j,i,n)$ and $(j,i+1,n)$ become $(k,n)$ and $(k+J,n)$, respectively.
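This flattening is exactly NumPy's row-major (C-order) memory layout; a small illustration with toy grid sizes (not the grid used below):

```python
import numpy

J, I = 5, 4                               # toy grid sizes
U = numpy.arange(I*J).reshape((I, J))     # U[i, j], stored in row-major order

j, i = 2, 3
k = j + i*J                               # flattened index of grid point (j, i)
assert U.ravel()[k] == U[i, j]            # k indexes the row-major flattening

# moving one grid point in the y-direction shifts k by J
assert U.ravel()[k - J] == U[i - 1, j]
```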

Our stencil therefore becomes

$$U_{k}^{n+1} - U_{k}^n = \sigma_x \left( U_{k+1}^n - 2 U_{k}^n + U_{k-1}^n + U_{k+1}^{n+1} - 2 U_{k}^{n+1} + U_{k-1}^{n+1}\right) + \sigma_y \left( U_{k+J}^n - 2 U_{k}^n + U_{k-J}^n + U_{k+J}^{n+1} - 2 U_{k}^{n+1} + U_{k-J}^{n+1}\right) + \Delta t f(U_{k}^n),$$

and reordering this expression, we obtain

$$-\sigma_y U_{k-J}^{n+1} - \sigma_x U_{k-1}^{n+1} + (1 + 2 \sigma_x + 2 \sigma_y) U_{k}^{n+1} - \sigma_x U_{k+1}^{n+1} - \sigma_y U_{k+J}^{n+1} = \sigma_y U_{k-J}^n + \sigma_x U_{k-1}^n + (1 - 2 \sigma_x - 2 \sigma_y) U_{k}^n + \sigma_x U_{k+1}^n + \sigma_y U_{k+J}^n + \Delta t f(U_{k}^n).$$

If we wrote this out in matrix notation, we would see banded matrices both on the left- and right-hand side (analogous to matrices $A$ and $B$ in the one-dimensional case):

On the left-hand side of this linear system, matrix $A$ contains a tridiagonal core as before, together with non-zero elements, $-\sigma_y$, in the $J$-th sub- and superdiagonals. Matrix $B$ on the right-hand side looks similar but with $+\sigma_y$ in the $J$-th sub- and superdiagonals.

Numerical inversion of banded matrices, such as $A$, is expensive with standard methods - at the very least we would not be able to use the standard Thomas algorithm for tridiagonal matrices.
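For reference, a tridiagonal system can be solved in $O(J)$ operations with the Thomas algorithm; a minimal sketch (assuming a diagonally dominant system, and not optimized like SciPy's `solve_banded`) might look like:

```python
import numpy

def thomas_solve(lower, diag, upper, rhs):
    """Solve a tridiagonal system; lower/upper have length n-1."""
    n = len(diag)
    c = numpy.empty(n - 1)   # modified superdiagonal
    d = numpy.empty(n)       # modified right-hand side
    # forward sweep: eliminate the subdiagonal
    c[0] = upper[0]/diag[0]
    d[0] = rhs[0]/diag[0]
    for k in range(1, n):
        m = diag[k] - lower[k - 1]*c[k - 1]
        if k < n - 1:
            c[k] = upper[k]/m
        d[k] = (rhs[k] - lower[k - 1]*d[k - 1])/m
    # backward substitution
    x = numpy.empty(n)
    x[-1] = d[-1]
    for k in range(n - 2, -1, -1):
        x[k] = d[k] - c[k]*x[k + 1]
    return x

# check against a dense solve on a small diagonally dominant system
n = 7
lower = -numpy.ones(n - 1)
upper = -numpy.ones(n - 1)
diag = 4.*numpy.ones(n)
rhs = numpy.arange(1., n + 1.)
A = numpy.diagflat(lower, -1) + numpy.diagflat(diag, 0) + numpy.diagflat(upper, 1)
assert numpy.allclose(A.dot(thomas_solve(lower, diag, upper, rhs)), rhs)
```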

The ADI stencil makes use of a trick that allows us to invert tridiagonal matrices, just as with the CN stencil, instead of banded matrices.

The ADI stencil also suggests a straightforward way to implement a parallelized variant of the ADI method and we will discuss and demonstrate this below.

The trick used in constructing the ADI method is to split our time step $\Delta t$ into two and apply two different stencils in each half time step: therefore to increment time by one time step $n \rightarrow n+1$ in grid point $(j,i,n)$, we first compute $U_{j,i}^{n+1/2}$ ($n \rightarrow n+1/2$) and then compute $U_{j,i}^{n+1}$ ($n+1/2 \rightarrow n+1$). Both of these stencils are chosen such that the resulting linear system is tridiagonal.

The two ADI stencils alternate which space direction ($x$ or $y$) is treated implicitly at the next time point (implicit vs. explicit finite difference stencils) - hence the name of the ADI method:

$$\frac{U_{j,i}^{n+1/2} - U_{j,i}^n}{\Delta t / 2} = \frac{D}{\Delta x^2} \left( U_{j+1,i}^{n+1/2} - 2 U_{j,i}^{n+1/2} + U_{j-1,i}^{n+1/2} \right) + \frac{D}{\Delta y^2} \left( U_{j,i+1}^n - 2 U_{j,i}^n + U_{j,i-1}^n \right) + f(U_{j,i}^n),$$

$$\frac{U_{j,i}^{n+1} - U_{j,i}^{n+1/2}}{\Delta t / 2} = \frac{D}{\Delta x^2} \left( U_{j+1,i}^{n+1/2} - 2 U_{j,i}^{n+1/2} + U_{j-1,i}^{n+1/2} \right) + \frac{D}{\Delta y^2} \left( U_{j,i+1}^{n+1} - 2 U_{j,i}^{n+1} + U_{j,i-1}^{n+1} \right) + f(U_{j,i}^{n+1/2}).$$

As before we define $\sigma_x = \frac{D \Delta t}{2 \Delta x^2}$ and $\sigma_y = \frac{D \Delta t}{2 \Delta y^2}$; below we also write $\alpha_x = \sigma_x$ and $\alpha_y = \sigma_y$ to match the variable names used in the code.

Reordering both stencils, we obtain the following systems of linear equations

$$-\sigma_x U_{j-1,i}^{n+1/2} + (1+2\sigma_x) U_{j,i}^{n+1/2} - \sigma_x U_{j+1,i}^{n+1/2} = \sigma_y U_{j,i+1}^n + (1-2\sigma_y) U_{j,i}^n + \sigma_y U_{j,i-1}^n + \frac{\Delta t}{2} f(U_{j,i}^n),$$

$$-\sigma_y U_{j,i+1}^{n+1} + (1+2\sigma_y) U_{j,i}^{n+1} - \sigma_y U_{j,i-1}^{n+1} = \sigma_x U_{j-1,i}^{n+1/2} + (1-2\sigma_x) U_{j,i}^{n+1/2} + \sigma_x U_{j+1,i}^{n+1/2} + \frac{\Delta t}{2} f(U_{j,i}^{n+1/2}).$$

### Family of Linear Systems in the $x$-Direction

Consider the stencil pictured in Dehghan Figure 1 and let us think of the matrix defined by $\left(U_{j,i}^n\right)$ ($j=0,\ldots,J-1$, $i=0\ldots,I-1$, $n$ held constant) as a concentration plane.

We realize that for fixed index $i$ the first of our two equations above defines a system of linear equations similar to the linear system we obtained for CN with one space dimension. As for CN with one space dimension, we need to amend certain entries of our linear system to accommodate our Neumann boundary conditions on the edges of our grid. In fact we need to accommodate boundary conditions in both the $x$- and $y$-direction.

Our first equation does not need to be modified for grid points that lie "within" the spatial part of our grid, i.e. $(j,i,n)$ for $j=1,\ldots,J-2$, $i=1,\ldots,I-2$, and $n=0,1,2,\ldots$:

$$-\sigma_x U_{j-1,i}^{n+1/2} + (1+2\sigma_x) U_{j,i}^{n+1/2} - \sigma_x U_{j+1,i}^{n+1/2} = \sigma_y U_{j,i+1}^n + (1-2\sigma_y) U_{j,i}^n + \sigma_y U_{j,i-1}^n + \frac{\Delta t}{2} f(U_{j,i}^n).$$

Let us take a look at what happens for grid points on the far left and right with $j=0,J-1$ and $i=1,\ldots,I-2$:

$$j=0:~ (1+\alpha_x) U_{0,i}^{n+1/2} - \alpha_x U_{1,i}^{n+1/2} = \alpha_y U_{0,i-1}^n + (1-2\alpha_y) U_{0,i}^n + \alpha_y U_{0,i+1}^n + \frac{\Delta t}{2} f(U_{0,i}^n),$$

$$j=J-1:~ -\alpha_x U_{J-2,i}^{n+1/2} + (1+\alpha_x) U_{J-1,i}^{n+1/2} = \alpha_y U_{J-1,i-1}^n + (1-2\alpha_y) U_{J-1,i}^n + \alpha_y U_{J-1,i+1}^n + \frac{\Delta t}{2} f(U_{J-1,i}^n).$$

On the left-hand side of the linear system defined by these three expressions (imagine varying $j=0,1,\ldots,J-1$) we can already make out essentially the same matrix as we constructed for CN with one space dimension. Note that this matrix is the same for all indices $i=0,1,\ldots,I-1$.

This completes incorporating our boundary conditions in the $x$-direction. Before we move on to doing the same with our boundary conditions in the $y$-direction, let us state clearly that our $(x,y)$-concentration plane has right-handed orientation, meaning that $x$ increases left to right and $y$ increases bottom to top. Our grid has the same orientation, so that index $j$ increases left to right and index $i$ increases bottom to top.

Let us now take a look at this equation for $j=1,\ldots,J-2$ and $i=0$ (bottom) and $i=I-1$ (top):

$$i=0:~ -\alpha_x U_{j-1,0}^{n+1/2} + (1+2\alpha_x) U_{j,0}^{n+1/2} - \alpha_x U_{j+1,0}^{n+1/2} = (1-\alpha_y) U_{j,0}^n + \alpha_y U_{j,1}^n + \frac{\Delta t}{2} f(U_{j,0}^n),$$

$$i=I-1:~ -\alpha_x U_{j-1,I-1}^{n+1/2} + (1+2\alpha_x) U_{j,I-1}^{n+1/2} - \alpha_x U_{j+1,I-1}^{n+1/2} = \alpha_y U_{j,I-2}^n + (1-\alpha_y) U_{j,I-1}^n + \frac{\Delta t}{2} f(U_{j,I-1}^n).$$

To rewrite these equations in compact matrix notation, let us define a horizontal slice (left to right) through our concentration plane as vector

$$\mathbf{U}_{x,i}^n = \begin{bmatrix}U_{0,i}^n, & \ldots, & U_{J-1,i}^n \end{bmatrix}$$

We can now combine all of the above and write the first of our two families of ADI linear systems compactly:

$$A \mathbf{U}_{x,i}^{n+1/2} = \mathbf{b}_i + \mathbf{f}\left( \Delta t \mathbf{U}_{x,i}^n \right),~i=0,\ldots,I-1,$$

where

$$A = \begin{bmatrix} 1+\alpha_x & -\alpha_x & 0 & 0 & 0 & \cdots & 0 & 0 & 0 & 0\\ -\alpha_x & 1+2\alpha_x & -\alpha_x & 0 & 0 & \cdots & 0 & 0 & 0 & 0 \\ 0 & -\alpha_x & 1+2\alpha_x & -\alpha_x & \cdots & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & \ddots & \ddots & \ddots & \ddots & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & -\alpha_x & 1+2\alpha_x & -\alpha_x \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -\alpha_x & 1+\alpha_x \end{bmatrix}.$$

The form of vector $\mathbf{b}_i$ depends on $i$:

$$i=I-1:~ \mathbf{b}_{I-1} = \begin{bmatrix} (1-\alpha_y) U_{0,I-1}^n + \alpha_y U_{0,I-2}^n \\ \vdots \\ (1-\alpha_y) U_{J-1,I-1}^n + \alpha_y U_{J-1,I-2}^n \end{bmatrix}$$

$$i=I-2,\ldots,1:~ \mathbf{b}_i = \begin{bmatrix} \alpha_y U_{0,i+1}^n + (1-2\alpha_y) U_{0,i}^n + \alpha_y U_{0,i-1}^n \\ \vdots \\ \alpha_y U_{J-1,i+1}^n + (1-2\alpha_y) U_{J-1,i}^n + \alpha_y U_{J-1,i-1}^n \end{bmatrix}$$

$$i=0:~ \mathbf{b}_0 = \begin{bmatrix} \alpha_y U_{0,1}^n + (1-\alpha_y) U_{0,0}^n \\ \vdots \\ \alpha_y U_{J-1,1}^n + (1-\alpha_y) U_{J-1,0}^n \end{bmatrix}.$$

The reaction term vector carries the factor $\frac{\Delta t}{2}$ from the half time step:

$$\mathbf{f}\left( \Delta t \mathbf{U}_{x,i}^n \right) = \begin{bmatrix} \frac{\Delta t}{2} f(U_{0,i}^n), & \ldots, & \frac{\Delta t}{2} f(U_{J-1,i}^n) \end{bmatrix}$$

### Family of Linear Systems in the $y$-Direction

Let us first define a vertical (from top to bottom) slice through our concentration plane (note our comment on right-handedness above) as:

$$\mathbf{U}_{y,j}^n = \begin{bmatrix}U_{j,I-1}^n, & U_{j,I-2}^n, & \ldots, & U_{j,0}^n \end{bmatrix}$$

Following an equivalent procedure to above for the second family of ADI linear systems we obtain:

$$C \mathbf{U}_{y,j}^{n+1} = \mathbf{d}_j + \mathbf{f}\left( \Delta t \mathbf{U}_{y,j}^{n+1/2} \right),~j=0,\ldots,J-1,$$

where

$$C = \begin{bmatrix} 1+\alpha_y & -\alpha_y & 0 & 0 & 0 & \cdots & 0 & 0 & 0 & 0\\ -\alpha_y & 1+2\alpha_y & -\alpha_y & 0 & 0 & \cdots & 0 & 0 & 0 & 0 \\ 0 & -\alpha_y & 1+2\alpha_y & -\alpha_y & \cdots & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & \ddots & \ddots & \ddots & \ddots & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & -\alpha_y & 1+2\alpha_y & -\alpha_y \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -\alpha_y & 1+\alpha_y \end{bmatrix}.$$

The form of vector $\mathbf{d}_j$ depends on $j$:

$$j=0:~ \mathbf{d}_0 = \begin{bmatrix} (1-\alpha_x) U_{0,I-1}^{n+1/2} + \alpha_x U_{1,I-1}^{n+1/2} \\ (1-\alpha_x) U_{0,I-2}^{n+1/2} + \alpha_x U_{1,I-2}^{n+1/2} \\ \vdots \\ (1-\alpha_x) U_{0,0}^{n+1/2} + \alpha_x U_{1,0}^{n+1/2} \end{bmatrix}$$

$$j=1,\ldots,J-2:~ \mathbf{d}_j = \begin{bmatrix} \alpha_x U_{j-1,I-1}^{n+1/2} + (1-2\alpha_x) U_{j,I-1}^{n+1/2} + \alpha_x U_{j+1,I-1}^{n+1/2} \\ \alpha_x U_{j-1,I-2}^{n+1/2} + (1-2\alpha_x) U_{j,I-2}^{n+1/2} + \alpha_x U_{j+1,I-2}^{n+1/2} \\ \vdots \\ \alpha_x U_{j-1,0}^{n+1/2} + (1-2\alpha_x) U_{j,0}^{n+1/2} + \alpha_x U_{j+1,0}^{n+1/2} \end{bmatrix}$$

$$j=J-1:~ \mathbf{d}_{J-1} = \begin{bmatrix} \alpha_x U_{J-2,I-1}^{n+1/2} + (1-\alpha_x) U_{J-1,I-1}^{n+1/2} \\ \alpha_x U_{J-2,I-2}^{n+1/2} + (1-\alpha_x) U_{J-1,I-2}^{n+1/2} \\ \vdots \\ \alpha_x U_{J-2,0}^{n+1/2} + (1-\alpha_x) U_{J-1,0}^{n+1/2} \end{bmatrix}$$

The reaction term vector again carries the factor $\frac{\Delta t}{2}$:

$$\mathbf{f}\left( \Delta t \mathbf{U}_{y,j}^{n+1/2} \right) = \begin{bmatrix} \frac{\Delta t}{2} f(U_{j,I-1}^{n+1/2}), & \frac{\Delta t}{2} f(U_{j,I-2}^{n+1/2}), & \ldots, & \frac{\Delta t}{2} f(U_{j,0}^{n+1/2}) \end{bmatrix}$$

To summarize, the ADI stencil generates two families of linear systems that we need to solve iteratively:

$$A \mathbf{U}_{x,i}^{n+1/2} = \mathbf{b}_i + \mathbf{f}\left( \Delta t \mathbf{U}_{x,i}^n \right),~i=0,\ldots,I-1,$$

$$C \mathbf{U}_{y,j}^{n+1} = \mathbf{d}_j + \mathbf{f}\left( \Delta t \mathbf{U}_{y,j}^{n+1/2} \right),~j=0,\ldots,J-1.$$
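As a quick sanity check, one full ADI step with these Neumann stencils and $f = 0$ conserves the total amount of $U$ exactly. This is a standalone sketch, separate from the notebook below; the sizes and $\alpha$ values are made up:

```python
import numpy

# toy sizes and diffusion numbers (made-up values)
J, I = 4, 5
alpha_x, alpha_y = 0.3, 0.2

def implicit_matrix(alpha, n):
    # tridiagonal matrix A (or C) with the Neumann-modified corner entries 1+alpha
    return (numpy.diagflat([-alpha]*(n - 1), -1)
            + numpy.diagflat([1. + alpha] + [1. + 2.*alpha]*(n - 2) + [1. + alpha], 0)
            + numpy.diagflat([-alpha]*(n - 1), 1))

def explicit_y(U, alpha):
    # explicit stencil along axis 0 with reflected (zero-flux) boundary rows
    up   = U[numpy.r_[1:U.shape[0], U.shape[0] - 1], :]
    down = U[numpy.r_[0, 0:U.shape[0] - 1], :]
    return U + alpha*(up - 2.*U + down)

A = implicit_matrix(alpha_x, J)   # acts along x (on each row)
C = implicit_matrix(alpha_y, I)   # acts along y (on each column)

U = numpy.random.RandomState(0).rand(I, J)
mass = U.sum()

# half step 1: implicit in x, explicit in y (solve A for every row at once)
U = numpy.linalg.solve(A, explicit_y(U, alpha_y).T).T
# half step 2: implicit in y, explicit in x
U = numpy.linalg.solve(C, explicit_y(U.T, alpha_x).T)

assert numpy.allclose(U.sum(), mass)   # total concentration is conserved
```

Conservation follows because both the implicit matrices and the explicit stencils have unit row and column sums with these boundary modifications.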

Upon closer inspection, we realize that we can solve the $I$ linear systems of the first family in parallel. To see this let us take a look at two arbitrary linear systems out of this family:

$$A \mathbf{U}_{x,i_1}^{n+1/2} = \mathbf{b}_{i_1} + \mathbf{f}\left( \Delta t \mathbf{U}_{x,i_1}^n \right),$$

$$A \mathbf{U}_{x,i_2}^{n+1/2} = \mathbf{b}_{i_2} + \mathbf{f}\left( \Delta t \mathbf{U}_{x,i_2}^n \right).$$

As we can see, there is no interdependence between the linear systems for indices $i_1$ and $i_2$ that would prevent us from solving them in parallel.

The same reasoning applies to our second family of linear systems: we can solve the linear systems for all indices $j$ of that family in parallel.
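As a sketch of this parallelism (a standalone toy example: the matrix and right-hand sides are made up, and a thread pool stands in for whichever parallel backend one prefers), the $I$ independent solves of the first family can be dispatched concurrently:

```python
import numpy
from concurrent.futures import ThreadPoolExecutor

# toy first-family setup: tridiagonal matrix A as in the text, random right-hand sides
J, I = 50, 40
alpha_x = 0.25
A = (numpy.diagflat([-alpha_x]*(J - 1), -1)
     + numpy.diagflat([1. + alpha_x] + [1. + 2.*alpha_x]*(J - 2) + [1. + alpha_x], 0)
     + numpy.diagflat([-alpha_x]*(J - 1), 1))

rng = numpy.random.RandomState(1)
b = [rng.rand(J) for _ in range(I)]   # one right-hand side b_i per horizontal slice

def solve_slice(b_i):
    # each system A U_i = b_i is independent of every other slice
    return numpy.linalg.solve(A, b_i)

with ThreadPoolExecutor(max_workers=4) as pool:
    U_half = list(pool.map(solve_slice, b))   # all I slices solved concurrently

# the parallel result agrees with the serial loop
serial = [numpy.linalg.solve(A, b_i) for b_i in b]
assert all(numpy.allclose(p, s) for p, s in zip(U_half, serial))
```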

## An ADI Example in Python

In [ ]:
import numpy
from matplotlib import pyplot

In [ ]:
%matplotlib inline

In [2]:
J = 5
I = 10

print numpy.array([[str((j,i)) for j in range(J)] for i in range(I-1,-1,-1)])

[['(0, 9)' '(1, 9)' '(2, 9)' '(3, 9)' '(4, 9)']
['(0, 8)' '(1, 8)' '(2, 8)' '(3, 8)' '(4, 8)']
['(0, 7)' '(1, 7)' '(2, 7)' '(3, 7)' '(4, 7)']
['(0, 6)' '(1, 6)' '(2, 6)' '(3, 6)' '(4, 6)']
['(0, 5)' '(1, 5)' '(2, 5)' '(3, 5)' '(4, 5)']
['(0, 4)' '(1, 4)' '(2, 4)' '(3, 4)' '(4, 4)']
['(0, 3)' '(1, 3)' '(2, 3)' '(3, 3)' '(4, 3)']
['(0, 2)' '(1, 2)' '(2, 2)' '(3, 2)' '(4, 2)']
['(0, 1)' '(1, 1)' '(2, 1)' '(3, 1)' '(4, 1)']
['(0, 0)' '(1, 0)' '(2, 0)' '(3, 0)' '(4, 0)']]

In [3]:
L_x = 1.
L_y = 2.
T = 1.

J = 100
I = 200
N = 1000

dx = float(L_x)/float(J-1)
dy = float(L_y)/float(I-1)
dt = float(T)/float(N-1)

x_grid = numpy.array([j*dx for j in range(J)])
y_grid = numpy.array([i*dy for i in range(I)])
t_grid = numpy.array([n*dt for n in range(N)])

In [4]:
D_U = 0.1
D_V = 10.

In [5]:
tot_protein = 2.26

In [6]:
no_high = 10
U =  numpy.array([[0.1 if (I-1)-i+1 > no_high else 2.0 for j in range(J)] for i in range(I)])

U_protein = sum(sum(U))*dx*dy
V_protein_px = float(tot_protein-U_protein)/float(I*J*dx*dy)
V = numpy.array([[V_protein_px for i in range(0,J)] for i in range(I)])

print 'Initial protein mass', (sum(sum(U))+sum(sum(V)))*(dx*dy)

Initial protein mass 2.26

In [7]:
fig, ax = pyplot.subplots()
ax.set_xlim(left=0., right=(J-1)*dx)
ax.set_ylim(bottom=0., top=(I-1)*dy)
heatmap = ax.pcolor(x_grid, y_grid, U, vmin=0., vmax=2.1)
colorbar = pyplot.colorbar(heatmap)
colorbar.set_label('concentration U')

In [8]:
alpha_x_U = D_U*dt/(2.*dx*dx)
alpha_y_U = D_U*dt/(2.*dy*dy)

alpha_x_V = D_V*dt/(2.*dx*dx)
alpha_y_V = D_V*dt/(2.*dy*dy)

In [9]:
A_U = numpy.diagflat([-alpha_x_U for j in range(J-1)], -1)+\
numpy.diagflat([1.+alpha_x_U]+[1.+2.*alpha_x_U for j in range(J-2)]+[1.+alpha_x_U], 0)+\
numpy.diagflat([-alpha_x_U for j in range(J-1)], 1)

C_U = numpy.diagflat([-alpha_y_U for i in range(I-1)], -1)+\
numpy.diagflat([1.+alpha_y_U]+[1.+2.*alpha_y_U for i in range(I-2)]+[1.+alpha_y_U], 0)+\
numpy.diagflat([-alpha_y_U for i in range(I-1)], 1)

A_V = numpy.diagflat([-alpha_x_V for j in range(J-1)], -1)+\
numpy.diagflat([1.+alpha_x_V]+[1.+2.*alpha_x_V for j in range(J-2)]+[1.+alpha_x_V], 0)+\
numpy.diagflat([-alpha_x_V for j in range(J-1)], 1)

C_V = numpy.diagflat([-alpha_y_V for i in range(I-1)], -1)+\
numpy.diagflat([1.+alpha_y_V]+[1.+2.*alpha_y_V for i in range(I-2)]+[1.+alpha_y_V], 0)+\
numpy.diagflat([-alpha_y_V for i in range(I-1)], 1)

In [10]:
k0 = 0.067

# reaction term f(u, v) = v*(k0 + u^2/(1 + u^2)) - u, scaled by the half time step dt/2
f_vec = lambda U, V: numpy.multiply(float(dt)/float(2.),
                                    numpy.subtract(numpy.multiply(V,
                                        numpy.add(k0,
                                            numpy.divide(numpy.multiply(U, U),
                                                numpy.add(1., numpy.multiply(U, U))))), U))

In [11]:
b_t_stencil_U = numpy.array([[(1.-alpha_y_U) for j in range(J)],
[alpha_y_U for j in range(J)]])
b_c_stencil_U = numpy.array([[alpha_y_U for j in range(J)],
[1.-2.*alpha_y_U for j in range(J)],
[alpha_y_U for j in range(J)]])
b_b_stencil_U = numpy.array([[alpha_y_U for j in range(J)],
[(1.-alpha_y_U) for j in range(J)]])

b_t_stencil_V = numpy.array([[(1.-alpha_y_V) for j in range(J)],
[alpha_y_V for j in range(J)]])
b_c_stencil_V = numpy.array([[alpha_y_V for j in range(J)],
[1.-2.*alpha_y_V for j in range(J)],
[alpha_y_V for j in range(J)]])
b_b_stencil_V = numpy.array([[alpha_y_V for j in range(J)],
[(1.-alpha_y_V) for j in range(J)]])

d_l_stencil_U = numpy.array([[1.-alpha_x_U, alpha_x_U] for i in range(I)])
d_c_stencil_U = numpy.array([[alpha_x_U, 1.-2.*alpha_x_U, alpha_x_U] for i in range(I)])
d_r_stencil_U = numpy.array([[alpha_x_U,1.-alpha_x_U] for i in range(I)])

d_l_stencil_V = numpy.array([[1.-alpha_x_V, alpha_x_V] for i in range(I)])
d_c_stencil_V = numpy.array([[alpha_x_V, 1.-2.*alpha_x_V, alpha_x_V] for i in range(I)])
d_r_stencil_V = numpy.array([[alpha_x_V, 1.-alpha_x_V] for i in range(I)])

f_curr = f_vec(U,V)

def b_U(i):
    # apply the explicit y-stencil to rows of U and add the reaction term at row i
    if i <= I-2 and i >= 1:
        U_y = U[[i+1, i, i-1], :]
        return (U_y*b_c_stencil_U).sum(axis=0) + f_curr[i, :]
    elif i == I-1:
        U_y = U[[I-1, I-2], :]
        return (U_y*b_t_stencil_U).sum(axis=0) + f_curr[I-1, :]
    elif i == 0:
        U_y = U[[1, 0], :]
        return (U_y*b_b_stencil_U).sum(axis=0) + f_curr[0, :]

def b_V(i):
    # as b_U, but V's reaction term enters with opposite sign
    if i <= I-2 and i >= 1:
        V_y = V[[i+1, i, i-1], :]
        return (V_y*b_c_stencil_V).sum(axis=0) - f_curr[i, :]
    elif i == I-1:
        V_y = V[[I-1, I-2], :]
        return (V_y*b_t_stencil_V).sum(axis=0) - f_curr[I-1, :]
    elif i == 0:
        V_y = V[[1, 0], :]
        return (V_y*b_b_stencil_V).sum(axis=0) - f_curr[0, :]

def d_U(j):
    # apply the explicit x-stencil to columns of U and add the reaction term at column j
    if j <= J-2 and j >= 1:
        U_x = U[:, [j-1, j, j+1]]
        return (U_x*d_c_stencil_U).sum(axis=1) + f_curr[:, j]
    if j == 0:
        U_x = U[:, [0, 1]]
        return (U_x*d_l_stencil_U).sum(axis=1) + f_curr[:, 0]
    if j == J-1:
        U_x = U[:, [J-2, J-1]]
        return (U_x*d_r_stencil_U).sum(axis=1) + f_curr[:, J-1]

def d_V(j):
    # as d_U, but V's reaction term enters with opposite sign
    if j <= J-2 and j >= 1:
        V_x = V[:, [j-1, j, j+1]]
        return (V_x*d_c_stencil_V).sum(axis=1) - f_curr[:, j]
    if j == 0:
        V_x = V[:, [0, 1]]
        return (V_x*d_l_stencil_V).sum(axis=1) - f_curr[:, 0]
    if j == J-1:
        V_x = V[:, [J-2, J-1]]
        return (V_x*d_r_stencil_V).sum(axis=1) - f_curr[:, J-1]

In [ ]:
for n in range(N):
    print 'time = %.6f' % (n*dt)
    print 'total protein = %.6f' % ((dx*dy)*(sum(sum(U))+sum(sum(V))))

    # first half step: implicit in x, explicit in y;
    # build every right-hand side from the fields at time n, then update
    f_curr = f_vec(U, V)
    U_half, V_half = numpy.empty_like(U), numpy.empty_like(V)
    for i in range(I):
        U_half[i, :] = numpy.linalg.solve(A_U, b_U(i))
        V_half[i, :] = numpy.linalg.solve(A_V, b_V(i))
    U, V = U_half, V_half

    # second half step: implicit in y, explicit in x,
    # with the reaction term evaluated at the half step
    f_curr = f_vec(U, V)
    U_new, V_new = numpy.empty_like(U), numpy.empty_like(V)
    for j in range(J):
        U_new[:, j] = numpy.linalg.solve(C_U, d_U(j))
        V_new[:, j] = numpy.linalg.solve(C_V, d_V(j))
    U, V = U_new, V_new

In [ ]:
fig, ax = pyplot.subplots()
ax.set_xlim(left=0., right=(J-1)*dx)
ax.set_ylim(bottom=0., top=(I-1)*dy)
heatmap = ax.pcolor(x_grid, y_grid, U, vmin=0., vmax=2.1)
colorbar = pyplot.colorbar(heatmap)
colorbar.set_label('concentration U')