using Plots, ComplexPhasePortrait, ApproxFun, SingularIntegralEquations, DifferentialEquations, LinearAlgebra
gr();
Dr. Sheehan Olver
In this lecture we do the following:
1. derive the three-term recurrence relationship that every family of orthogonal polynomials satisfies,
2. encode it as the Jacobi operator, which implements multiplication by $x$ in coefficient space,
3. use the recurrence to evaluate orthogonal polynomials and expansions in them (Clenshaw's algorithm), and
4. combine it with Gram–Schmidt to calculate the recurrence coefficients for a general weight.
A central theme: if you know the Jacobi operator / three-term recurrence, you know the polynomials. This is the best way to evaluate expansions in orthogonal polynomials: even for Chebyshev $T_n(x) = \cos(n \arccos x)$, using the recurrence avoids evaluating trigonometric functions.
Every family of orthogonal polynomials has a three-term recurrence relationship:
Theorem (three-term recurrence) Suppose $\{p_n(x)\}$ are a family of orthogonal polynomials w.r.t. a weight $w(x)$. Then there exist constants $a_n$, $b_n \neq 0$ and $c_n$ such that
$$x p_0(x) = a_0 p_0(x) + b_0 p_1(x)$$
$$x p_n(x) = c_n p_{n-1}(x) + a_n p_n(x) + b_n p_{n+1}(x)$$
Proof The first part follows since $p_0(x)$ and $p_1(x)$ span all degree 1 polynomials.
The second part follows essentially because multiplication by $x$ is "self-adjoint", that is,
$$\langle x f, g \rangle = \int_a^b x f(x) g(x) w(x) \, dx = \langle f, x g \rangle$$
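Spelling this step out: since $x p_n(x)$ has degree $n+1$, we can expand it in the orthogonal basis,
$$x p_n(x) = \sum_{k=0}^{n+1} \frac{\langle x p_n, p_k \rangle}{\|p_k\|^2} p_k(x),$$
and for $k \leq n-2$ self-adjointness gives
$$\langle x p_n, p_k \rangle = \langle p_n, x p_k \rangle = 0,$$
since $x p_k(x)$ has degree $k+1 \leq n-1$ and $p_n$ is orthogonal to all polynomials of lower degree. Hence only the $k = n-1, n, n+1$ terms survive, giving $c_n$, $a_n$ and $b_n$.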
⬛️
Monic polynomials clearly have $b_n = 1$. Orthonormal polynomials have an even simpler form:
Theorem (orthonormal three-term recurrence) Suppose $\{q_n(x)\}$ are a family of orthonormal polynomials w.r.t. a weight $w(x)$. Then there exist constants $a_n$ and $b_n$ such that
$$x q_0(x) = a_0 q_0(x) + b_0 q_1(x)$$
$$x q_n(x) = b_{n-1} q_{n-1}(x) + a_n q_n(x) + b_n q_{n+1}(x)$$
Proof Follows again by self-adjointness of multiplication by $x$:
$$c_n = \langle x q_n, q_{n-1} \rangle = \langle q_n, x q_{n-1} \rangle = \langle x q_{n-1}, q_n \rangle = b_{n-1}$$
Example Last lecture, we used the formula, derived via trigonometric manipulations,
$$T_1(x) = x T_0(x)$$
$$T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x)$$
Rearranged, this is the three-term recurrence $x T_0(x) = T_1(x)$ and $x T_n(x) = \tfrac{1}{2} T_{n-1}(x) + \tfrac{1}{2} T_{n+1}(x)$, which we can verify numerically:
T = (n,x) -> cos(n*acos(x))
x = 0.5
n = 10
@show x*T(0,x) - (T(1,x))
@show x*T(n,x) - (T(n-1,x) + T(n+1,x))/2;
x * T(0, x) - T(1, x) = 1.1102230246251565e-16
x * T(n, x) - (T(n - 1, x) + T(n + 1, x)) / 2 = 5.273559366969494e-16
Corollary (symmetric three-term recurrence implies orthonormal) Suppose $\{p_n(x)\}$ are a family of orthogonal polynomials w.r.t. a weight $w(x)$ such that
$$x p_0(x) = a_0 p_0(x) + b_0 p_1(x)$$
$$x p_n(x) = b_{n-1} p_{n-1}(x) + a_n p_n(x) + b_n p_{n+1}(x)$$
Then $\|p_n\| = \|p_0\|$ for all $n$; in particular, if $\|p_0\| = 1$ the polynomials are orthonormal.
Proof This follows from
$$b_n = \frac{\langle x p_n, p_{n+1} \rangle}{\|p_{n+1}\|^2} = \frac{\langle x p_{n+1}, p_n \rangle}{\|p_{n+1}\|^2} = b_n \frac{\|p_n\|^2}{\|p_{n+1}\|^2}$$
hence $\|p_{n+1}\| = \|p_n\|$.
Remark We can scale $w(x)$ by a constant without changing the orthogonality properties, hence we can make $\|p_0\| = 1$ by changing the weight.
Remark This is beyond the scope of this course, but satisfying a three-term recurrence like this, with bounded coefficients and $p_0(x) = 1$, is sufficient to show that there exists a $w(x)$ (or more accurately, a Borel measure) such that the $p_n(x)$ are orthogonal w.r.t. $w$. The relationship between the coefficients $a_n, b_n$ and $w(x)$ is an object of study in spectral theory, particularly when the coefficients are periodic, quasi-periodic or random.
We can rewrite the three-term recurrence as
$$x \begin{pmatrix} p_0(x) \\ p_1(x) \\ p_2(x) \\ \vdots \end{pmatrix} = J \begin{pmatrix} p_0(x) \\ p_1(x) \\ p_2(x) \\ \vdots \end{pmatrix}$$
where $J$ is the tridiagonal Jacobi operator
$$J = \begin{pmatrix} a_0 & b_0 \\ c_1 & a_1 & b_1 \\ & c_2 & a_2 & b_2 \\ & & \ddots & \ddots & \ddots \end{pmatrix}$$
When the polynomials are monic, we have 1 on the superdiagonal. When we have an orthonormal basis, then $J$ is symmetric:
$$J = \begin{pmatrix} a_0 & b_0 \\ b_0 & a_1 & b_1 \\ & b_1 & a_2 & b_2 \\ & & b_2 & a_3 & \ddots \\ & & & \ddots & \ddots \end{pmatrix}$$
Given a polynomial expansion, $J$ tells us how to multiply by $x$ in coefficient space: if
$$f(x) = \sum_{k=0}^\infty f_k p_k(x) = (p_0(x), p_1(x), \ldots) \begin{pmatrix} f_0 \\ f_1 \\ f_2 \\ \vdots \end{pmatrix}$$
then
$$x f(x) = (p_0(x), p_1(x), \ldots) J^\top \begin{pmatrix} f_0 \\ f_1 \\ f_2 \\ \vdots \end{pmatrix}$$
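To see why the transpose appears, here is the one-line calculation (writing $\mathbf{p}(x) = (p_0(x), p_1(x), \ldots)^\top$ and $\mathbf{f} = (f_0, f_1, \ldots)^\top$):
$$x f(x) = \sum_{k=0}^\infty f_k \, x p_k(x) = \left(x \mathbf{p}(x)\right)^\top \mathbf{f} = \left(J \mathbf{p}(x)\right)^\top \mathbf{f} = \mathbf{p}(x)^\top J^\top \mathbf{f}$$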
Example For the case of Chebyshev polynomials, we have
$$J = \begin{pmatrix} 0 & 1 \\ \tfrac{1}{2} & 0 & \tfrac{1}{2} \\ & \tfrac{1}{2} & 0 & \tfrac{1}{2} \\ & & \tfrac{1}{2} & 0 & \ddots \\ & & & \ddots & \ddots \end{pmatrix}$$
f = Fun(exp, Chebyshev())
n = ncoefficients(f) # number of coefficients
@show n
J = zeros(n, n+1)   # finite n × (n+1) section of the Jacobi operator
J[1,2] = 1
for k = 2:n
    J[k,k-1] = J[k,k+1] = 1/2
end
J'
n = 14
15×14 Adjoint{Float64,Array{Float64,2}}:
 0.0  0.5  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0
 1.0  0.0  0.5  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0
 0.0  0.5  0.0  0.5  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.5  0.0  0.5  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.5  0.0  0.5  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.5  0.0  0.5  0.0  0.0  0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0  0.5  0.0  0.5  0.0  0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0  0.0  0.5  0.0  0.5  0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.5  0.0  0.5  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.5  0.0  0.5  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.5  0.0  0.5  0.0  0.0
 0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.5  0.0  0.5  0.0
 0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.5  0.0  0.5
 0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.5  0.0
 0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.5
cfs = J'*f.coefficients # coefficients of x*f
xf = Fun(Chebyshev(), cfs)
xf(0.1) - 0.1*f(0.1)
4.163336342344337e-17
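As a further illustration (not from the lecture), we can multiply by $x^2$ by applying the Jacobi operator twice in coefficient space; mult_x_section is a hypothetical helper that rebuilds the rectangular finite section of $J^\top$ from above for any size:
# Hypothetical helper (not from the lecture): the (m+1) × m finite section of Jᵀ
# for Chebyshev, i.e. multiplication by x acting on m Chebyshev coefficients.
function mult_x_section(m)
    Jt = zeros(m+1, m)
    Jt[2,1] = 1                         # x T₀ = T₁
    for k = 2:m
        Jt[k-1,k] = Jt[k+1,k] = 1/2     # x Tₖ₋₁ = (Tₖ₋₂ + Tₖ)/2
    end
    Jt
end

m = ncoefficients(f)
x2f = Fun(Chebyshev(), mult_x_section(m+1)*(mult_x_section(m)*f.coefficients))
x2f(0.1) - 0.1^2*f(0.1)   # should be on the order of machine precision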
We can use the three-term recurrence to construct the polynomials. I think it's nicest to express this in terms of linear algebra. Suppose we are given $p_0(x) = 1$ (which is pretty much always the case in practice). This can be written in matrix form as
$$(1, 0, 0, 0, 0, \ldots) \begin{pmatrix} p_0(x) \\ p_1(x) \\ p_2(x) \\ \vdots \end{pmatrix} = 1$$
Combined with the three-term recurrence (each relation rearranged so that the right-hand side is zero), this gives a lower-triangular system
$$L_x \begin{pmatrix} p_0(x) \\ p_1(x) \\ p_2(x) \\ \vdots \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 0 \\ \vdots \end{pmatrix} = e_0$$
which can be solved by forward substitution, that is, by applying the recurrence.
Example We can construct $T_0(x), \ldots, T_{n-1}(x)$ via
$$T_0(x) = 1$$
$$T_1(x) = x T_0(x)$$
$$T_2(x) = 2x T_1(x) - T_0(x)$$
$$T_3(x) = 2x T_2(x) - T_1(x)$$
$$\vdots$$
function recurrence_Chebyshev(n,x)
    T = zeros(n)
    T[1] = 1.0                       # T₀(x) = 1
    T[2] = x*T[1]                    # T₁(x) = x T₀(x)
    for k = 2:n-1
        T[k+1] = 2x*T[k] - T[k-1]    # Tₖ₊₁(x) = 2x Tₖ(x) − Tₖ₋₁(x)
    end
    T
end
trig_Chebyshev(n,x) = [cos(k*acos(x)) for k=0:n-1]
trig_Chebyshev (generic function with 1 method)
n = 10
recurrence_Chebyshev(n, 0.1) - trig_Chebyshev(n,0.1) |>norm
1.1102230246251565e-16
n = 10000
@time recurrence_Chebyshev(n, 0.1)
@time trig_Chebyshev(n,0.1);
0.000031 seconds (6 allocations: 78.359 KiB)
0.000269 seconds (6 allocations: 78.359 KiB)
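The same forward recurrence works for any family of orthonormal polynomials once the recurrence coefficients are known. Here is a sketch (the name recurrence_eval is ours, not from the lecture, and it assumes the orthonormal convention with $q_0(x) = 1$):
# Sketch: evaluate q₀(x), …, q_{n-1}(x) given orthonormal recurrence coefficients,
# stored 1-based as a[k] = a_{k-1}, b[k] = b_{k-1}, assuming q₀(x) = 1.
function recurrence_eval(a, b, n, x)
    q = zeros(n)
    q[1] = 1.0
    q[2] = (x - a[1])/b[1]                 # from x q₀ = a₀ q₀ + b₀ q₁
    for k = 2:n-1
        # x q_{k-1} = b_{k-2} q_{k-2} + a_{k-1} q_{k-1} + b_{k-1} q_k, rearranged:
        q[k+1] = ((x - a[k])*q[k] - b[k-1]*q[k-1])/b[k]
    end
    q
end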
We can use this to evaluate functions as well: since $(p_0(x), p_1(x), \ldots) = (L_x^{-1} e_0)^\top = e_0^\top L_x^{-\top}$, we have
$$f(x) = (p_0(x), p_1(x), \ldots) \begin{pmatrix} f_0 \\ f_1 \\ \vdots \end{pmatrix} = e_0^\top L_x^{-\top} \begin{pmatrix} f_0 \\ f_1 \\ \vdots \end{pmatrix}$$
Example For Chebyshev, we want to solve the system
$$\underbrace{\begin{pmatrix} 1 & -x & \tfrac{1}{2} \\ & 1 & -x & \tfrac{1}{2} \\ & & \tfrac{1}{2} & -x & \ddots \\ & & & \tfrac{1}{2} & \ddots & \tfrac{1}{2} \\ & & & & \ddots & -x \\ & & & & & \tfrac{1}{2} \end{pmatrix}}_{L_x^\top} \begin{pmatrix} \gamma_0 \\ \vdots \\ \gamma_{n-1} \end{pmatrix} = \begin{pmatrix} f_0 \\ \vdots \\ f_{n-1} \end{pmatrix}$$
by back substitution; then $f(x) = \gamma_0$.
function clenshaw_Chebyshev(f,x)
    n = length(f)
    γ = zeros(n)
    # back substitution in the upper-triangular system Lₓᵀ γ = f
    γ[n] = 2f[n]
    γ[n-1] = 2f[n-1] + 2x*γ[n]
    for k = n-2:-1:1
        γ[k] = 2f[k] + 2x*γ[k+1] - γ[k+2]
    end
    # the first two rows of Lₓᵀ differ, so γ[1] and γ[2] are recomputed
    γ[2] = f[2] + x*γ[3] - γ[4]/2
    γ[1] = f[1] + x*γ[2] - γ[3]/2
    γ[1]
end
clenshaw_Chebyshev (generic function with 1 method)
f = Fun(exp, Chebyshev())
clenshaw_Chebyshev(f.coefficients, 0.1) - exp(0.1)
-1.3322676295501878e-15
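As a cross-check (not part of the lecture), summing $f_k T_k(0.1)$ directly with the forward recurrence from above should agree with Clenshaw's result:
m = ncoefficients(f)
dot(recurrence_Chebyshev(m, 0.1), f.coefficients) - clenshaw_Chebyshev(f.coefficients, 0.1)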
With some high performance computing tweaks, this can be made more accurate; this is the algorithm used for evaluating functions in ApproxFun:
f(0.1) - exp(0.1)
0.0
Recall that last lecture we introduced the Gram–Schmidt approach to constructing OPs. The three-term recurrence means we can simplify it, calculating the recurrence coefficients at the same time:
Proposition (Gram–Schmidt) Define
$$p_0(x) = 1, \qquad q_0(x) = \frac{p_0(x)}{\|p_0\|}$$
$$a_n = \langle x q_n, q_n \rangle, \qquad b_{n-1} = \langle x q_n, q_{n-1} \rangle$$
$$p_{n+1}(x) = x q_n(x) - a_n q_n(x) - b_{n-1} q_{n-1}(x)$$
$$q_{n+1}(x) = \frac{p_{n+1}(x)}{\|p_{n+1}\|}$$
Then $q_0(x), q_1(x), \ldots$ are orthonormal w.r.t. $w(x)$, and $a_n$, $b_n$ are the coefficients of the three-term recurrence.
Remark This can be made a bit more efficient by using $\|p_n\|$ to calculate $b_{n-1}$, avoiding one inner product per step.
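To see why, a short calculation in the notation of the proposition: by construction $\langle p_n, q_{n-1} \rangle = \langle p_n, q_{n-2} \rangle = 0$ and $q_n = p_n / \|p_n\|$, so
$$b_{n-1} = \langle x q_n, q_{n-1} \rangle = \langle q_n, x q_{n-1} \rangle = \frac{\langle p_n, \, p_n + a_{n-1} q_{n-1} + b_{n-2} q_{n-2} \rangle}{\|p_n\|} = \|p_n\|,$$
that is, the norm already computed in the normalisation step gives $b_{n-1}$ for free.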
x = Fun()
w = exp(x)
ip = (f,g) -> sum(f*g*w)
nrm = f -> sqrt(ip(f,f))
n = 10
q = Array{Fun}(undef, n)
p = Array{Fun}(undef, n)
a = zeros(n)
b = zeros(n)
p[1] = Fun(1, -1 .. 1 )
q[1] = p[1]/nrm(p[1])
p[2] = x*q[1]
a[1] = ip(p[2],q[1])
p[2] -= a[1]*q[1]
q[2] = p[2]/nrm(p[2])
for k=2:n-1
    p[k+1] = x*q[k]
    b[k-1] = ip(p[k+1], q[k-1])
    a[k] = ip(p[k+1], q[k])
    p[k+1] = p[k+1] - a[k]*q[k] - b[k-1]*q[k-1]
    q[k+1] = p[k+1]/nrm(p[k+1])
end
ip(q[5],q[2])
3.2959746043559335e-16
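A fuller check (not in the lecture): the Gram matrix of the computed $q_k$ should be approximately the identity.
norm([ip(q[j], q[k]) for j=1:n, k=1:n] - I)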
p = plot(; legend=false)
for k=1:10
    plot!(q[k])
end
p
norm(x*q[4] - (b[3]*q[3] + a[4]*q[4] + b[4]*q[5]))
9.45569169473238e-16
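Finally, as a sanity check (not in the lecture), we can assemble the computed coefficients into a finite section of the symmetric Jacobi operator from earlier using SymTridiagonal (from LinearAlgebra, loaded above); any interior row reproduces multiplication by $x$:
Jsec = SymTridiagonal(a[1:n-1], b[1:n-2])   # finite section built from the computed a, b

# row k encodes x q_{k-1} = b_{k-2} q_{k-2} + a_{k-1} q_{k-1} + b_{k-1} q_k
k = 5
norm(x*q[k] - (Jsec[k,k-1]*q[k-1] + Jsec[k,k]*q[k] + Jsec[k,k+1]*q[k+1]))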