First, we load PyPlot for plotting and copy some code from the IJulia notebook posted with the lecture:
using PyPlot
# construct the (M+1)xM matrix D, not including the 1/dx factor
diff1(M) = [ [1.0 zeros(1,M-1)]; diagm(ones(M-1),1) - eye(M) ]
# sparse version (the lazy way):
sdiff1(M) = sparse(diff1(M))
# make the discrete -Laplacian in 2d, with Dirichlet boundaries
function Laplacian(Nx, Ny, Lx, Ly)
dx = Lx / (Nx+1)
dy = Ly / (Ny+1)
Dx = sdiff1(Nx) / dx
Dy = sdiff1(Ny) / dy
Ax = Dx' * Dx
Ay = Dy' * Dy
return kron(speye(Ny), Ax) + kron(Ay, speye(Nx))
end
Laplacian (generic function with 1 method)
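The code above is pre-1.0 Julia (`eye`, `speye`, and `diagm(v,1)` no longer exist). As a sanity check on the construction, the 1d matrix $D^T D/\Delta x^2$ should be the tridiagonal $(-1,2,-1)/\Delta x^2$ matrix, whose eigenvalues are known in closed form. A minimal sketch in current Julia syntax (assuming only the standard `SparseArrays` and `LinearAlgebra` libraries):

```julia
using SparseArrays, LinearAlgebra

# diff1/sdiff1 as above, in current syntax: eye(M) -> I,
# diagm(ones(M-1),1) -> spdiagm(1 => ones(M-1))
diff1(M) = [ [1.0 zeros(1,M-1)]; spdiagm(1 => ones(M-1)) - I ]
sdiff1(M) = sparse(diff1(M))

# sanity check: D'D/dx^2 is tridiagonal (-1, 2, -1)/dx^2, whose
# eigenvalues are (2 - 2cos(kπ/(M+1)))/dx^2 for k = 1..M
M, L = 8, 1.0
dx = L / (M+1)
D = sdiff1(M) / dx
λ = sort(eigvals(Symmetric(Matrix(D' * D))))
λexact = [(2 - 2cos(k*π/(M+1)))/dx^2 for k in 1:M]
maximum(abs.(λ - λexact))  # ≈ 0, up to roundoff
```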
Compute the first six $-\nabla^2$ eigenvalues in a $100\times 100$ grid for a radius-1 cylinder:
Lx = 1
Ly = 1
Nx = 100
Ny = 100
x = linspace(-Lx,Lx,Nx+2)[2:end-1] # a column vector
y = linspace(-Ly,Ly,Ny+2)[2:end-1]' # a row vector
r = sqrt(x.^2 .+ y.^2) # use broadcasting (.+) to make Nx x Ny matrix of radii
i = find(r .< 1) # and get indices of points inside the cylinder
A = Laplacian(Nx,Ny,2Lx,2Ly)
Ai = A[i,i] # to make a submatrix for the Laplacian inside the cylinder
λi, Ui = eigs(Ai, which="SM");
λi
6-element Array{Float64,1}:
  5.7062
 14.4833
 14.4833
 25.985
 26.0355
 30.0471
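A side note on `which="SM"`: asking ARPACK directly for the smallest-magnitude eigenvalues converges slowly (hence the `maxiter=10000` needed in the larger runs below). A usually faster alternative is shift-invert with `sigma=0`, which targets the eigenvalues nearest zero. A sketch on a small 1d Laplacian stand-in, assuming the Arpack.jl package in current Julia (where `eigs` now lives):

```julia
using Arpack, SparseArrays, LinearAlgebra

# small 1d Laplacian (tridiagonal -1, 2, -1) as a stand-in for Ai:
M = 200
T = spdiagm(-1 => -ones(M-1), 0 => fill(2.0, M), 1 => -ones(M-1))

# shift-invert about σ = 0: targets eigenvalues nearest zero,
# typically converging in far fewer iterations than which=:SM
λ, ϕ = eigs(T, nev=3, sigma=0)

# analytic eigenvalues of the (-1, 2, -1) matrix, for comparison:
λexact = [2 - 2cos(k*π/(M+1)) for k in 1:3]
```

The tradeoff is that shift-invert must factorize `T - σI` once up front, which costs memory for large 2d problems.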
Compare the first eigenvalue with the analytical one, from the root of the Bessel function $J_0(x)$:
using PyCall
so = pyimport("scipy.optimize") # SciPy's optimize module, via PyCall
k01 = so.newton(x -> besselj(0,x), 2) # Newton's method starting at x = 2
λi1 = k01^2
(λi1 - λi[1]) / λi1
0.013311839384716175
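SciPy isn't essential here: since $J_0'(x) = -J_1(x)$, a few steps of Newton's method in pure Julia recover the same root (assuming the SpecialFunctions package, which provides `besselj` in current Julia):

```julia
using SpecialFunctions  # besselj lives here in Julia ≥ 0.6

# Newton's method for a root of J₀: x ← x - J₀(x)/J₀'(x), with J₀' = -J₁
function bessel_j0_root(x)
    for _ in 1:8                             # quadratic convergence; 8 steps is plenty
        x += besselj(0, x) / besselj(1, x)   # x - J₀/(-J₁)
    end
    return x
end

k01 = bessel_j0_root(2.0)  # first root of J₀, ≈ 2.404825557695773
λ1_exact = k01^2           # exact smallest -∇² eigenvalue on the unit disk
```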
Hooray, the fractional difference is pretty small, about 1.3%! Now, let's repeat it for bigger grids:
λi_200 = let Nx = 200, Ny = 200,
x = linspace(-Lx,Lx,Nx+2)[2:end-1],
y = linspace(-Ly,Ly,Ny+2)[2:end-1]',
r = sqrt(x.^2 .+ y.^2),
i = find(r .< 1),
A = Laplacian(Nx,Ny,2Lx,2Ly),
Ai = A[i,i],
result = eigs(Ai, which="SM", nev=1, maxiter=10000)
result[1][1]
end
5.746460127868359
λi_400 = let Nx = 400, Ny = 400,
x = linspace(-Lx,Lx,Nx+2)[2:end-1],
y = linspace(-Ly,Ly,Ny+2)[2:end-1]',
r = sqrt(x.^2 .+ y.^2),
i = find(r .< 1),
A = Laplacian(Nx,Ny,2Lx,2Ly),
Ai = A[i,i],
result = eigs(Ai, which="SM", nev=1, maxiter=10000)
result[1][1]
end
5.764358138404327
(λi1 - λi_200) / λi1, (λi1 - λi_400) / λi1
(0.006350450307793745,0.0032556145804557303)
The error is definitely decreasing, but it is only halving with each doubling of the resolution:
(λi1 - λi_200) / (λi1 - λi[1]), (λi1 - λi_400) / (λi1 - λi_200)
(0.4770528042191477,0.512658854516222)
That is, we only have first-order convergence: errors $O(\Delta x)$, rather than the second-order accuracy we might naively expect from our center-difference approximations. The problem is that we are making a first-order error in our boundary conditions. The boundary pixels that we are setting to zero come from a square grid and don't lie exactly on the radius-1 circle; the distance between our "staircased"/gridded boundary and the true boundary is proportional to $\Delta x$, so this is a first-order error.
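The observed order can be read off directly by fitting $e \propto \Delta x^p$ to successive errors, using the fractional errors computed above:

```julia
# fractional errors at Nx = 100, 200, 400 (from the runs above):
e100 = 0.013311839384716175
e200 = 0.006350450307793745
e400 = 0.0032556145804557303

# if e ∝ Δx^p, then halving Δx multiplies e by 2^(-p), so:
p1 = log2(e100 / e200)   # ≈ 1.07
p2 = log2(e200 / e400)   # ≈ 0.96
```

Both estimates are consistent with first-order ($p \approx 1$) convergence.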
Lx = 1
Ly = 1
Nx = 100
Ny = 100
x = linspace(-Lx,Lx,Nx+2)[2:end-1] # a column vector
y = linspace(-Ly,Ly,Ny+2)[2:end-1]' # a row vector
r = sqrt(x.^2 .+ y.^2)
dx = Lx / (Nx+1)
dy = Ly / (Ny+1)
# we'll also need x and y on the v and w grids:
xv = linspace(-Lx,Lx,Nx+2)[1:end-1] + dx/2
yw = linspace(-Ly,Ly,Ny+2)[1:end-1]' + dy/2
rv = sqrt(xv.^2 .+ y.^2)
rw = sqrt(x.^2 .+ yw.^2)
# construct G from Dx and Dy via Kronecker products:
Dx = sdiff1(Nx) / dx
Dy = sdiff1(Ny) / dy
G = [kron(speye(Ny), Dx); kron(Dy, speye(Nx))]
# The C matrices multiply by c(r) = r^2 + 1 on the v and w grids:
cv = rv.^2 + 1
cw = rw.^2 + 1
Cg = spdiagm([vec(cv); vec(cw)], 0) # diagonal matrix of c on the combined v and w grids
# we put it all together to get our A0 matrix on the square domain:
A0 = -G' * Cg * G
# then we pick out the parts inside the cylinder as before
i = find(r .< 1)
Ai = A0[i,i]
8000x8000 sparse matrix with 39600 Float64 nonzeros:
	[1   ,    1]  =  -81693.0
	[2   ,    1]  =  20344.3
	[25  ,    1]  =  20264.3
	[1   ,    2]  =  20344.3
	[2   ,    2]  =  -81401.0
	[3   ,    2]  =  20274.2
	[26  ,    2]  =  20192.3
	[2   ,    3]  =  20274.2
	[3   ,    3]  =  -81141.0
	[4   ,    3]  =  20212.3
	⋮
	[7974, 7998]  =  19932.3
	[7997, 7998]  =  20184.3
	[7998, 7998]  =  -80685.0
	[7999, 7998]  =  20242.3
	[7975, 7999]  =  19996.3
	[7998, 7999]  =  20242.3
	[7999, 7999]  =  -80937.0
	[8000, 7999]  =  20308.3
	[7976, 8000]  =  20068.2
	[7999, 8000]  =  20308.3
	[8000, 8000]  =  -81221.0
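As a check on the staggered-grid construction, when $c \equiv 1$ the matrix $G^T G$ should reproduce exactly the $-\nabla^2$ matrix built earlier from $D^T D$ Kronecker products. A sketch in current Julia syntax (`speye(N)` is now `sparse(I, N, N)`), on a small grid:

```julia
using SparseArrays, LinearAlgebra

# diff1/sdiff1 as above, in current syntax:
diff1(M) = [ [1.0 zeros(1,M-1)]; spdiagm(1 => ones(M-1)) - I ]
sdiff1(M) = sparse(diff1(M))

Nx, Ny, Lx, Ly = 4, 5, 1.0, 2.0
dx, dy = Lx/(Nx+1), Ly/(Ny+1)
Dx, Dy = sdiff1(Nx)/dx, sdiff1(Ny)/dy

# gradient operator mapping u onto the staggered v and w grids:
G = [kron(sparse(I, Ny, Ny), Dx); kron(Dy, sparse(I, Nx, Nx))]

# with c ≡ 1, G'CG = G'G should equal the earlier Kronecker form:
A_G    = G' * G
A_kron = kron(sparse(I, Ny, Ny), Dx' * Dx) + kron(Dy' * Dy, sparse(I, Nx, Nx))
maximum(abs.(A_G - A_kron))  # ≈ 0, up to roundoff
```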
# following the lecture notebook:
λi, Ui = eigs(Ai, which="SM", nev=1)
u = zeros(Nx,Ny)
u[i] = Ui[:,1]
imshow(u, extent=[-Ly,Ly,-Lx,Lx], cmap="RdBu")
contour(y,x, (r.<1) * 1.0)
Now, we'll plot $u(r)$, comparing to the analytical solution for $c=1$:
plot(r[div(Nx,2):end, div(Ny,2)], u[div(Nx,2):end, div(Ny,2)], "ro-")
plot(0:0.01:1, besselj(0, k01 * (0:0.01:1)) / 45, "b--") # scaled by 1/45 to match u's arbitrary normalization
xlabel(L"radius $r/R$")
ylabel(L"eigenfunctions $u_0(r)$")
legend([L"$c(r) = r^2 + 1$", L"$c = 1$"])
savefig("pset5-f13-2c.pdf")
Compared to the $c=1$ case, the solution is more concentrated at small $r$. This is consistent with what we expect from the min–max theorem, since this eigenfunction "wants" to minimize the Rayleigh quotient $\int c|\nabla u|^2 / \int |u|^2$, so it "wants" to be concentrated in the regions of smallest $c$, and this occurs in the center (small $r$).
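The discrete analogue of this variational picture is easy to check: for any trial vector $u$, the Rayleigh quotient $u^T A u / u^T u$ is bounded below by the smallest eigenvalue, with equality at the eigenvector. A minimal sketch on a small symmetric stand-in matrix (a 1d Laplacian, not the cylinder matrix above):

```julia
using LinearAlgebra

# small symmetric positive-definite stand-in (1d discrete Laplacian):
M = 50
A = SymTridiagonal(fill(2.0, M), fill(-1.0, M-1))
λ, U = eigen(A)   # eigenvalues in ascending order

rayleigh(u) = (u' * A * u) / (u' * u)

# at the first eigenvector, the quotient equals λ₁ ...
abs(rayleigh(U[:,1]) - λ[1])   # ≈ 0

# ... while random trial vectors always give something larger:
minimum(rayleigh(randn(M)) for _ in 1:100) ≥ λ[1]   # true
```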