We want to adapt the approach we introduced last lecture to solve *boundary value problems*. Consider the following simple example (the equation solved in the code below):

$$u''(x) + 1000x^2 u(x) = 0, \qquad u(0) = 1, \quad u(1) = 2.$$

Here is an example solution:

In [18]:

```
using ApproxFun
x=Fun(identity,[0.,1.])      # the function x on the interval [0,1]
B=dirichlet([0.,1.])         # evaluation of u at both endpoints
u=[B;Derivative([0.,1.])^2+1000x^2]\[1.,2.]
ApproxFun.plot(u)
```

Out[18]:

Unlike initial value problems, where the conditions $u(0)=a$ and $u'(0)=b$ are specified at a single point, in boundary value problems the conditions are specified at two *different* points. This means we cannot view the solution as "time-stepping": we have to solve the problem globally. We will do so using the approach advocated last lecture of constructing discrete derivatives.

Recall the midpoint discrete derivative

$$D_n : {\hbox{Values at } \atop x_0,\ldots,x_n} \rightarrow {\hbox{Values at } \atop x_{1/2},\ldots,x_{n-1/2}}$$which is an $n \times n+1$ matrix with entries

$$D_n \triangleq {1 \over h} \begin{pmatrix} -1 & 1 \cr & -1 & 1 \cr &&\ddots & \ddots \cr &&& -1 & 1\end{pmatrix}$$where $h=1/n$ and $x_k=kh$. We will construct an approximate second derivative by now creating another midpoint discrete derivative
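As a quick sanity check on this definition (a minimal sketch, wrapped in a `let` block so it leaves the notebook's variables untouched), we can apply $D_n$ to samples of $\cos x$ and compare against $-\sin x$ at the midpoints:

```julia
using LinearAlgebra

# apply the midpoint discrete derivative to samples of cos;
# the result should approximate -sin at the midpoints
err = let n = 10, h = 1/n
    Dn = zeros(n, n+1)
    for k = 1:n
        Dn[k, k]   = -1/h
        Dn[k, k+1] =  1/h
    end
    x = (0:n) * h              # grid x_0,…,x_n
    m = x[1:end-1] .+ h/2      # midpoints x_{1/2},…,x_{n-1/2}
    norm(Dn * cos.(x) - (-sin.(m)), Inf)
end
```

The divided difference at the midpoint is second-order accurate, so `err` is already small for $n = 10$.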

$$D_{n-1} : {\hbox{Values at } \atop x_{1/2},\ldots,x_{n-1/2}} \rightarrow {\hbox{Values at } \atop x_1,\ldots,x_{n-1}},$$which is $n-1 \times n$.

Because the spacing between the nodes is still $h=1/n$, when we approximate data at $x_{1/2},\ldots,x_{n-1/2}$ by trapezoids, differentiate, and evaluate at the grid $x_1,\ldots,x_{n-1}$ we get the entries:

$$D_{n-1} \triangleq {1 \over h} \begin{pmatrix} -1 & 1 \cr & -1 & 1 \cr &&\ddots & \ddots \cr &&& -1 & 1\end{pmatrix}$$where $h$ is still $1/n$.

The $n-1 \times n+1$ discrete second derivative is then specified by

$$\tilde D_n^2 \triangleq D_{n-1}D_n.$$This satisfies

$$\tilde D_n^2 : {\hbox{Values at } \atop x_{0},\ldots,x_{n}} \rightarrow {\hbox{Values at } \atop x_1,\ldots,x_{n-1}},$$that is, we map from all the nodes to the interior nodes. The entries are given by matrix multiplication as

$$\tilde D_n^2 \triangleq {1 \over h^2} \begin{pmatrix} 1 & -2 & 1 \cr &1 & -2 & 1 \cr &&\ddots & \ddots & \ddots \cr &&&1 & -2 & 1\end{pmatrix}$$

**Remark** Matrices with constant diagonals are called *Toeplitz matrices*. They have been studied extensively, with special emphasis on the behaviour of their eigenvalues as the dimension tends to infinity.
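We can confirm the stencil by multiplying the two bidiagonal matrices numerically (an illustrative snippet; `bidiag` is our throwaway helper, and the computation is wrapped in `let` so it does not disturb later cells):

```julia
row = let n = 8, h = 1/n
    # bidiagonal midpoint derivative of size m × (m+1)
    bidiag(m) = [ (j == k ? -1/h : (j == k+1 ? 1/h : 0.0)) for k in 1:m, j in 1:m+1 ]
    DD = bidiag(n-1) * bidiag(n)   # (n-1) × (n+1) discrete second derivative
    DD[3, 3:5] * h^2               # an interior row, rescaled by h²
end
# row == [1.0, -2.0, 1.0]
```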

We can construct this matrix as `D2`:

In [1]:

```
# discrete first derivative
function D(h,n)
    ret=zeros(n,n+1)
    for k=1:n
        ret[k,k]=-1/h
        ret[k,k+1]=1/h
    end
    ret
end

# discrete second derivative
D2(h,n) = D(h,n-1)*D(h,n)
```

Out[1]:

We verify that `D2*f(x)` gives an approximation of the second derivative at the interior nodes:

In [2]:

```
n=10
h=1/n
f=x->cos(x)
fpp=x->-cos(x) # second derivative of f
x=linspace(0.,1.,n+1) # domain nodes
r=x[2:end-1] # range nodes
norm(D2(h,n)*f(x) - fpp(r),Inf)
```

Out[2]:

**Exercise** Estimate the rate of convergence by finding $\alpha$ so that the error decays like $Cn^\alpha$.
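One way to investigate this empirically (a self-contained sketch that rebuilds the matrix inline; the names `D2mat`, `errs` and `rates` are ours, not from the lecture code):

```julia
using LinearAlgebra

# rebuild the discrete second derivative so this snippet stands alone
function D2mat(h, n)
    D(m) = [ (j == k ? -1/h : (j == k+1 ? 1/h : 0.0)) for k in 1:m, j in 1:m+1 ]
    D(n-1) * D(n)
end

errs = Float64[]
for n in (10, 20, 40, 80)
    h = 1/n
    x = (0:n) * h
    r = x[2:end-1]     # interior nodes
    push!(errs, norm(D2mat(h, n) * cos.(x) - (-cos.(r)), Inf))
end

# log2 of successive error ratios estimates -α in the model error ≈ C n^α
rates = log2.(errs[1:end-1] ./ errs[2:end])
```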

We now set up the multiplication operator representing multiplication by $a(x)$. This is the $n-1 \times n+1$ matrix

$$A_n : {\hbox{Values at } \atop x_0,\ldots,x_n} \rightarrow {\hbox{Values at } \atop x_1,\ldots,x_{n-1}}$$with entries given by

$$A_n \triangleq \begin{pmatrix} 0 & a(x_1) \cr && a(x_2) \cr &&&\ddots \cr &&&&a(x_{n-1}) & 0 \end{pmatrix}.$$We can set this up as follows:

In [3]:

```
function A(a::Function,h,n)
    ret=zeros(n-1,n+1)
    for k=1:n-1
        ret[k,k+1]=a(k*h)
    end
    ret
end
```

Out[3]:

In this case, the operator is, in fact, exact:

In [4]:

```
a=x->sin(x)
A(a,h,n)
norm(A(a,h,n)*f(x) - a(r).*f(r),Inf)
```

Out[4]:

So the operator $L = D^2 - a(x)$ is discretized as

$$L_n = \tilde D_n^2 - A_n$$which is a map

$$L_n : {\hbox{Values at } \atop x_0,\ldots,x_n} \rightarrow {\hbox{Values at } \atop x_1,\ldots,x_{n-1}}$$

In [6]:

```
L=D2(h,n) - A(a,h,n)
```

Out[6]:

We need to represent $u(0)$ and $u(1)$ where $u$ is given at the grid $x_0,\ldots,x_n$. We see that this is accomplished via the $1 \times n+1$ row vectors

$$B_n^0 \triangleq [1,0,\cdots,0]$$

In [7]:

```
B0 = [1 zeros(1,n)]
B0*f(x) - f(0)
```

Out[7]:

and $$B_n^1 \triangleq [0,0,\cdots,1]$$

In [8]:

```
B1 = [zeros(1,n) 1]
B1*f(x) - f(1)
```

Out[8]:

We now discretize the *operator*

$$M u \triangleq \begin{pmatrix} u(0) \cr u'' - a(x)u \cr u(1)\end{pmatrix}$$

by

$$M_n = \begin{pmatrix} B_n^0 \cr \tilde D_n^2 - A_n \cr B_n^1\end{pmatrix}$$We put the boundary conditions at the top and bottom so that $M_n$ is tridiagonal (that is, it has only three non-zero bands).
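We can check the claimed band structure directly (an illustrative check using `Tridiagonal` from the standard `LinearAlgebra` library; the rebuild of $M_n$ inside `let` mirrors the cell below without disturbing its variables):

```julia
using LinearAlgebra

istri = let n = 6, h = 1/n, a = x -> sin(x)
    # rebuild the pieces: bidiagonal derivatives, multiplication operator, boundary rows
    D(m) = [ (j == k ? -1/h : (j == k+1 ? 1/h : 0.0)) for k in 1:m, j in 1:m+1 ]
    Am   = [ (j == k+1 ? a(k*h) : 0.0) for k in 1:n-1, j in 1:n+1 ]
    M    = [ [1 zeros(1,n)]; D(n-1)*D(n) - Am; [zeros(1,n) 1] ]
    M == Tridiagonal(M)   # true iff nothing lies outside the three central bands
end
```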

In [9]:

```
L=D2(h,n) - A(a,h,n)
M=[B0;
L;
B1]
```

Out[9]:

We test that it approximates $M$:

In [10]:

```
norm(M*f(x) - [f(0.); fpp(r)-a(r).*f(r); f(1.)],Inf)
```

Out[10]:

**Exercise** Estimate the rate of convergence.

We now approximate the boundary value problem

$$M u = \begin{pmatrix} a \cr f(x) \cr b\end{pmatrix}$$by solving the discretized problem

$$M_n \mathbf w = \begin{pmatrix} a \cr f(\mathbf x[2:end-1]) \cr b\end{pmatrix}$$to find $w_k = \mathbf e_k^\top \mathbf w$ that approximates $u(x_k)$.

Here we solve the same equation as at the top:

In [13]:

```
using PyPlot
n=100
h=1/n
a=x->-1000x^2
x=linspace(0.,1.,n+1)
B0 = [1 zeros(1,n)]
B1 = [zeros(1,n) 1]
L=D2(h,n) - A(a,h,n)
M=[B0;
L;
B1]
w=M\[1.;zeros(n-1);2.]
plot(x,w);
```

We observe empirically that the method converges:

In [19]:

```
n=4000
h=1/n
a=x->-1000x^2
x=linspace(0.,1.,n+1)
B0 = [1 zeros(1,n)]
B1 = [zeros(1,n) 1]
L=D2(h,n) - A(a,h,n)
M=[B0;
L;
B1]
w=M\[1.;zeros(n-1);2.]
norm(w-u(x),Inf) # the exact solution u was calculated above
```

Out[19]:

**Exercise** What property of $M_n$ guarantees that we converge to the solution $u$ as fast as the approximation $M_n \mathbf u$ of $Mu$ converges, where $\mathbf u$ denotes the vector of values of $u$ at the grid points $x_0,\ldots,x_n$?
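A numerical experiment that may inform this exercise (an illustrative sketch, not the answer itself; `Mn` is our helper and `opnorm` comes from the standard `LinearAlgebra` library):

```julia
using LinearAlgebra

# rebuild M_n inline so this snippet stands alone
function Mn(a, n)
    h = 1/n
    D(m) = [ (j == k ? -1/h : (j == k+1 ? 1/h : 0.0)) for k in 1:m, j in 1:m+1 ]
    Am   = [ (j == k+1 ? a(k*h) : 0.0) for k in 1:n-1, j in 1:n+1 ]
    [ [1 zeros(1,n)]; D(n-1)*D(n) - Am; [zeros(1,n) 1] ]
end

# how much can M_n⁻¹ amplify a perturbation of the right-hand side?
norms = [ opnorm(inv(Mn(x -> -1000x^2, n)), Inf) for n in (20, 40, 80, 160) ]
```

If these norms remain bounded as $n$ grows, errors in the discretization translate into errors of the same order in $\mathbf w$.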