Many successful numerical solution methods for differential equations, including the finite element method, aim at approximating the unknown function by a sum
where ψi(x) are prescribed functions and c0,…,cN are unknown coefficients to be determined. Solution methods for differential equations utilizing (1) must have a principle for constructing N+1 equations to determine c0,…,cN. Then there is a machinery regarding the actual construction of the equations for c0,…,cN, in a particular problem. Finally, there is a solve phase for computing the solution c0,…,cN of the N+1 equations.
Especially in the finite element method, the machinery for constructing the discrete equations to be implemented on a computer is quite comprehensive, with many mathematical and implementational details entering the scene at the same time. From an ease-of-learning perspective it can therefore be wise to introduce the computational machinery for a trivial equation: u=f. Solving this equation with f given and u of the form (1), means that we seek an approximation u to f. This approximation problem has the advantage of introducing most of the finite element toolbox, but without involving demanding topics related to differential equations (e.g., integration by parts, boundary conditions, and coordinate mappings). This is the reason why we shall first become familiar with finite element approximation before addressing finite element methods for differential equations.
First, we refresh some linear algebra concepts about approximating vectors in vector spaces. Second, we extend these concepts to approximating functions in function spaces, using the same principles and the same notation. We present examples on approximating functions by global basis functions with support throughout the entire domain. That is, the functions are in general nonzero on the entire domain. Third, we introduce the finite element type of basis functions globally. These basis functions will later, in Function approximation by finite elements, be used with local support (meaning that each function is nonzero except in a small part of the domain) to enhance stability and efficiency. We explain all the details of the computational algorithms involving such functions. Four types of approximation principles are covered: 1) the least squares method, 2) the L2 projection or Galerkin method, 3) interpolation or collocation, and 4) the regression method.
We shall start by introducing two fundamental methods for determining the coefficients ci in (1). These methods will be introduced for approximation of vectors. Using vectors in vector spaces to bring across the ideas is believed to be more intuitive to the reader than starting directly with functions in function spaces. The extension from vectors to functions will be trivial as soon as the fundamental ideas are understood.
The first method of approximation is called the least squares method and consists in finding ci such that the difference f−u, measured in a certain norm, is minimized. That is, we aim at finding the best approximation u to f, with the given norm as measure of "distance". The second method is not as intuitive: we find u such that the error f−u is orthogonal to the space where u lies. This is known as projection, or in the context of differential equations, the idea is also well known as Galerkin's method. When approximating vectors and functions, the two methods are equivalent, but this is no longer the case when applying the principles to differential equations.
Let f=(3,5) be a vector in the xy plane and suppose we want to approximate this vector by a vector that is restricted to be aligned with some given vector (a,b). Figure depicts the situation. This is the simplest approximation problem for vectors. Nevertheless, for many readers it will be wise to refresh some basic linear algebra by consulting a textbook. Familiarity with fundamental operations on inner product vector spaces is assumed in the forthcoming text.
Approximation of a two-dimensional vector in a one-dimensional vector space.
We introduce the vector space V spanned by the vector ψ0=(a,b):
We say that ψ0 is a basis vector in the space V. Our aim is to find the vector
which best approximates the given vector f=(3,5). A reasonable criterion for a best approximation could be to minimize the length of the difference between the approximate u and the given f. The difference, or error e=f−u, has its length given by the norm
where (e,e) is the inner product of e and itself. The inner product, also called scalar product or dot product, of two vectors u=(u0,u1) and v=(v0,v1) is defined as
Remark. We should point out that we use the notation (⋅,⋅) for two different things: (a,b) for scalar quantities a and b means the vector starting at the origin and ending in the point (a,b), while (u,v) with vectors u and v means the inner product of these vectors. Since vectors are here written in boldface font there should be no confusion. We may add that the norm associated with this inner product is the usual Euclidean length of a vector, i.e.,
We can rewrite the expressions of the right-hand side in a more convenient form for further use:
This rewrite results from using the following fundamental rules for inner product spaces:
Minimizing E(c0) implies finding c0 such that
It turns out that E has one unique minimum and no maximum point. Now, when differentiating (6) with respect to c0, note that none of the inner product expressions depend on c0, so we simply get
Setting the above expression equal to zero and solving for c0 gives
which in the present case, with ψ0=(a,b), results in
For later, it is worth mentioning that setting the key equation (10) to zero and ordering the terms lead to
or
This implication of minimizing E is an important result that we shall make much use of.
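To make this concrete, here is a minimal numpy sketch of the computation, assuming for illustration the specific direction (a,b)=(2,1) (any other nonzero direction works the same way):
import numpy as np

f = np.array([3.0, 5.0])
psi0 = np.array([2.0, 1.0])   # hypothetical concrete choice of (a, b)

# Least squares / projection coefficient c0 = (f, psi0)/(psi0, psi0)
c0 = np.dot(f, psi0)/np.dot(psi0, psi0)
u = c0*psi0

# The error e = f - u is orthogonal to psi0
e = f - u
print(c0)               # 2.2
print(np.dot(e, psi0))  # 0 (up to round-off)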
We shall now show that minimizing ||e||2 implies that e is orthogonal to any vector v in the space V. This result is visually quite clear from Figure (think of other vectors along the line (a,b): all of them will lead to a larger distance between the approximation and f). To see it mathematically, note that any v∈V can be written as v=sψ0 for some scalar parameter s, and recall that two vectors are orthogonal when their inner product vanishes. We therefore calculate the inner product
Therefore, instead of minimizing the square of the norm, we could demand that e is orthogonal to any vector in V, which in our present simple case amounts to a single vector only. This method is known as projection. (The approach can also be referred to as a Galerkin method as explained at the end of the section Approximation of general vectors.)
Mathematically, the projection method is stated by the equation
An arbitrary v∈V can be expressed as sψ0, s∈R, and therefore (14) implies
which means that the error must be orthogonal to the basis vector in the space V:
which is what we found in (13) from the least squares computations.
Let us generalize the vector approximation from the previous section to vectors in spaces with arbitrary dimension. Given some vector f, we want to find the best approximation to this vector in the space
We assume that the space has dimension N+1 and that basis vectors ψ0,…,ψN are linearly independent so that none of them are redundant. Any vector u∈V can then be written as a linear combination of the basis vectors, i.e.,
where cj∈R are scalar coefficients to be determined.
Now we want to find c0,…,cN, such that u is the best approximation to f in the sense that the distance (error) e=f−u is minimized. Again, we define the squared distance as a function of the free parameters c0,…,cN,
Minimizing this E with respect to the independent variables c0,…,cN is obtained by requiring
since the expression to be differentiated is a sum and only one term, ci(f,ψi), contains ci (this term is linear in ci). To understand this differentiation in detail, write out the sum specifically for, e.g., N=3 and i=1.
The last term in (15) is more tedious to differentiate. It can be wise to write out the double sum for N=1 and perform differentiation with respect to c0 and c1 to see the structure of the expression. Thereafter, one can generalize to an arbitrary N and observe that
Then
Since each of the two sums is missing the term ci(ψi,ψi), we may split the very last term in two, to get exactly that "missing" term for each sum. This idea allows us to write
It then follows that setting
implies
Moving the first term to the right-hand side shows that the equation is actually a linear system for the unknown parameters c0,…,cN:
where
We have changed the order of the two vectors in the inner product according to (9):
simply because the sequence i-j looks more aesthetic.
In analogy with the "one-dimensional" example in the section Approximation of planar vectors, it holds also here in the general case that minimizing the distance (error) e is equivalent to demanding that e is orthogonal to all v∈V:
Since any v∈V can be written as v=∑Ni=0ciψi, the statement (22) is equivalent to saying that
for any choice of coefficients c0,…,cN. The latter equation can be rewritten as
If this is to hold for arbitrary values of c0,…,cN, we must require that each term in the sum vanishes, which means that
These N+1 equations result in the same linear system as (19):
and hence
So, instead of differentiating the E(c0,…,cN) function, we could simply use (22) as the principle for determining c0,…,cN, resulting in the N+1 equations (23).
The names least squares method or least squares approximation are natural since the calculations consist of minimizing ||e||2, and ||e||2 is a sum of squares of differences between the components in f and u. We find u such that this sum of squares is minimized.
The principle (22), or the equivalent form (23), is known as projection. Almost the same mathematical idea was used by the Russian mathematician Boris Galerkin to solve differential equations, resulting in what is widely known as Galerkin's method.
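As a small illustration of the general case, the numpy sketch below uses a hypothetical f and two basis vectors in R3, assembles the matrix with entries (ψj,ψi) and the right-hand side with entries (f,ψi), solves the linear system, and checks that the error is orthogonal to every basis vector:
import numpy as np

f = np.array([1.0, 2.0, 3.0])                 # hypothetical vector to approximate
psi = [np.array([1.0, 0.0, 0.0]),             # hypothetical basis vectors spanning V
       np.array([1.0, 1.0, 0.0])]

N = len(psi) - 1
A = np.zeros((N+1, N+1))
b = np.zeros(N+1)
for i in range(N+1):
    for j in range(N+1):
        A[i,j] = np.dot(psi[j], psi[i])       # (psi_j, psi_i)
    b[i] = np.dot(f, psi[i])                  # (f, psi_i)

c = np.linalg.solve(A, b)
u = sum(c[j]*psi[j] for j in range(N+1))
e = f - u
print([np.dot(e, p) for p in psi])            # both inner products are (close to) zero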
Let V be a function space spanned by a set of basis functions ψ0,…,ψN,
such that any function u∈V can be written as a linear combination of the basis functions:
That is, we consider functions as vectors in a vector space - a so-called function space - and we have a finite set of basis functions that span the space just as basis vectors or unit vectors span a vector space.
The index set Is is defined as Is={0,…,N} and is from now on used both for compact notation and for flexibility in the numbering of elements in sequences.
For now, in this introduction, we shall look at functions of a single variable x: u=u(x), ψj=ψj(x), j∈Is. Later, we will almost trivially extend the mathematical details to functions of two- or three-dimensional physical spaces. The approximation (24) is typically used to discretize a problem in space. Other methods, most notably finite differences, are common for time discretization, although the form (24) can be used in time as well.
Given a function f(x), how can we determine its best approximation u(x)∈V? A natural starting point is to apply the same reasoning as we did for vectors in the section Approximation of general vectors. That is, we minimize the distance between u and f. However, this requires a norm for measuring distances, and a norm is most conveniently defined through an inner product. Viewing a function as a vector of infinitely many point values, one for each value of x, the inner product of two arbitrary functions f(x) and g(x) could intuitively be defined as the usual summation of pairwise "components" (values), with summation replaced by integration:
To fix the integration domain, we let f(x) and ψi(x) be defined for a domain Ω⊂R. The inner product of two functions f(x) and g(x) is then
The distance between f and any function u∈V is simply f−u, and the squared norm of this distance is
Minimizing this function of N+1 scalar variables {ci}i∈Is requires differentiation with respect to ci, for all i∈Is. The resulting equations are very similar to those we had in the vector case, and we hence end up with a linear system of the form (19), with basically the same expressions:
The only difference from (19) is that the inner product is defined in terms of integration rather than summation.
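For instance, the inner product of two simple functions is a single sympy integration; a minimal sketch, assuming the hypothetical choices f(x)=x, g(x)=x2 and Ω=[0,1]:
import sympy as sym

x = sym.Symbol('x')
f = x
g = x**2
# (f, g) = integral of f*g over Omega = [0, 1]
print(sym.integrate(f*g, (x, 0, 1)))   # 1/4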
As in the section Approximation of general vectors, the minimization of (e,e) is equivalent to
Inserting e=f−u in this equation and ordering terms, as in the multi-dimensional vector case, we end up with a linear system with a coefficient matrix (28) and right-hand side vector (29).
Whether we work with vectors in the plane, general vectors, or functions in function spaces, the least squares principle and the projection or Galerkin method are equivalent.
Let us apply the theory in the previous section to a simple problem: given a parabola f(x)=10(x−1)2−1 for x∈Ω=[1,2], find the best approximation u(x) in the space of all linear functions:
With our notation, ψ0(x)=1, ψ1(x)=x, and N=1. We seek
where c0 and c1 are found by solving a 2×2 linear system. The coefficient matrix has elements
The corresponding right-hand side is
Solving the linear system results in
and consequently
Figure displays the parabola and its best approximation in the space of all linear functions.
Best approximation of a parabola by a straight line.
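Before turning to the general implementation below, the 2×2 system for this particular example can be set up and solved directly with sympy as a quick check; a small sketch (the solution comes out as c0=−38/3 and c1=10):
import sympy as sym

x, c0, c1 = sym.symbols('x c_0 c_1')
f = 10*(x-1)**2 - 1
psi = [1, x]                       # psi_0 = 1, psi_1 = x on Omega = [1, 2]
u = c0*psi[0] + c1*psi[1]

# Least squares / Galerkin conditions: (f - u, psi_i) = 0 for i = 0, 1
eqs = [sym.integrate((f - u)*p, (x, 1, 2)) for p in psi]
print(sym.solve(eqs, [c0, c1]))    # {c_0: -38/3, c_1: 10}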
The linear system can be computed either symbolically or
numerically (a numerical integration rule is needed in the latter case).
Let us first compute the system and its solution symbolically, i.e.,
using classical "pen and paper" mathematics with symbols.
The Python package sympy
can greatly help with this type of
mathematics, and will therefore be frequently used in this text.
Some basic familiarity with sympy
is assumed, typically
symbols
, integrate
, diff
, expand
, and simplify
. Much can be learned
by studying the many applications of sympy
that will be presented.
Below is a function for symbolic computation of the linear system,
where f(x) is given as a sympy
expression f
involving
the symbol x
, psi
is a list of expressions for {ψi}i∈Is,
and Omega
is a 2-tuple/list holding the limits of the domain Ω:
import sympy as sym
def least_squares(f, psi, Omega):
N = len(psi) - 1
A = sym.zeros(N+1, N+1)
b = sym.zeros(N+1, 1)
x = sym.Symbol('x')
for i in range(N+1):
for j in range(i, N+1):
A[i,j] = sym.integrate(psi[i]*psi[j],
(x, Omega[0], Omega[1]))
A[j,i] = A[i,j]
b[i,0] = sym.integrate(psi[i]*f, (x, Omega[0], Omega[1]))
c = A.LUsolve(b)
# Note: c is a sympy Matrix object, solution is in c[:,0]
u = 0
for i in range(len(psi)):
u += c[i,0]*psi[i]
return u, c
Observe that we exploit the symmetry of the coefficient matrix:
only the upper triangular part is computed. Symbolic integration, also in
sympy
, is often time consuming, and (roughly) halving the
work has noticeable effect on the waiting time for the computations to
finish.
Notice.
We remark that symbols in sympy are created and stored in a symbol factory that is indexed by the name used in the construction, so repeated constructions with the same name do not create new objects. The following code illustrates the behavior of the symbol factory:
from sympy import *
x0 = Symbol("x")
x1 = Symbol("x")
id(x0) == id(x1)
True
a0 = 3.0
a1 = 3.0
id(a0) == id(a1)
False
Obviously, sympy
may fail to successfully integrate
∫Ωψiψjdx, and
especially ∫Ωfψidx, symbolically.
Therefore, we should extend
the least_squares
function such that it falls back on
numerical integration if the symbolic integration is unsuccessful.
In the latter case, the returned value from sympy
's
integrate
function is an object of type Integral
.
We can test on this type and utilize the mpmath
module
to perform numerical integration of high precision.
Even when sympy
manages to integrate symbolically, it can
take an undesirably long time. We therefore include an
argument symbolic
that governs whether or not to try
symbolic integration. Here is a complete and
improved version of the previous function least_squares
:
import mpmath

def least_squares(f, psi, Omega, symbolic=True):
N = len(psi) - 1
A = sym.zeros(N+1, N+1)
b = sym.zeros(N+1, 1)
x = sym.Symbol('x')
for i in range(N+1):
for j in range(i, N+1):
integrand = psi[i]*psi[j]
if symbolic:
I = sym.integrate(integrand, (x, Omega[0], Omega[1]))
if not symbolic or isinstance(I, sym.Integral):
# Could not integrate symbolically, use numerical int.
integrand = sym.lambdify([x], integrand, 'mpmath')
I = mpmath.quad(integrand, [Omega[0], Omega[1]])
A[i,j] = A[j,i] = I
integrand = psi[i]*f
if symbolic:
I = sym.integrate(integrand, (x, Omega[0], Omega[1]))
if not symbolic or isinstance(I, sym.Integral):
# Could not integrate symbolically, use numerical int.
integrand = sym.lambdify([x], integrand, 'mpmath')
I = mpmath.quad(integrand, [Omega[0], Omega[1]])
b[i,0] = I
if symbolic:
c = A.LUsolve(b) # symbolic solve
# c is a sympy Matrix object, numbers are in c[i,0]
c = [sym.simplify(c[i,0]) for i in range(c.shape[0])]
else:
c = mpmath.lu_solve(A, b) # numerical solve
c = [c[i,0] for i in range(c.rows)]
u = sum(c[i]*psi[i] for i in range(len(psi)))
return u, c
The function is found in the file approx1D.py
.
Comparing the given f(x) and the approximate u(x) visually is done
with the following function, which utilizes sympy
's lambdify
tool to
convert a sympy
expression to a Python function for numerical
computations:
import numpy as np
import matplotlib.pyplot as plt
def comparison_plot(f, u, Omega, filename='tmp.pdf'):
x = sym.Symbol('x')
f = sym.lambdify([x], f, modules="numpy")
u = sym.lambdify([x], u, modules="numpy")
resolution = 401 # no of points in plot
xcoor = np.linspace(Omega[0], Omega[1], resolution)
exact = f(xcoor)
approx = u(xcoor)
plt.plot(xcoor, approx)
plt.plot(xcoor, exact)
plt.legend(['approximation', 'exact'])
# plt.savefig(filename)
The modules='numpy'
argument to lambdify
is important
if there are mathematical functions, such as sin
or exp
in the symbolic expressions in f
or u
, and these
mathematical functions are to be used with vector arguments, like
xcoor
above.
Both the least_squares
and comparison_plot
functions are found in
the file approx1D.py
. The
comparison_plot
function in this file is more advanced and flexible
than the simplistic version shown above. The file ex_approx1D.py
applies the approx1D
module to accomplish the forthcoming examples.
Let us use the code above to recompute the problem from the section Example of linear approximation where we want to approximate a parabola. What happens if we add an element x2 to the set of basis functions and test what the best approximation is if V is the space of all parabolic functions? The answer is quickly found by running
x = sym.Symbol('x')
f = 10*(x-1)**2-1
u, c = least_squares(f=f, psi=[1, x, x**2], Omega=[1, 2])
print(u)
print(sym.expand(f))
10*x**2 - 20*x + 9
10*x**2 - 20*x + 9
Now, what if we use ψi(x)=xi for i=0,1,…,N=40?
The output from least_squares
gives ci=0 for i>2, which
means that the method finds the perfect approximation.
When attempting to solve a system Ac=b, we may question how far off a start vector or a current approximation c0 is. The error is clearly the difference between c and c0, e=c−c0, but since we do not know the true solution c we are unable to assess the error. However, the vector c0 is the solution of an alternative problem Ac0=b0. If the inputs, i.e., the right-hand sides b0 and b, are close to each other, then we expect the outputs c0 and c of the solution process to be close to each other. Furthermore, while b0 in principle is unknown, it is easily computable as b0=Ac0 and does not require inversion of A. The vector b−b0 is the so-called residual r defined by
Clearly, the error and the residual are related by
While the computation of the error requires inversion of A, which may be computationally expensive, the residual is easily computable and only requires a matrix-vector product and vector additions.
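A small numpy sketch of the relation between error and residual, using a hypothetical 2×2 system with known solution:
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
c = np.array([1.0, 2.0])        # true solution of A c = b
b = A @ c

c0 = np.array([0.9, 2.1])       # some approximation of c
r = b - A @ c0                  # residual: only a matrix-vector product is needed
e = c - c0                      # error: requires knowing (or solving for) c
print(np.allclose(A @ e, r))    # True: the error and residual satisfy A e = r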
Approximation of a function via orthogonal functions, especially sinusoidal functions or orthogonal polynomials, is a very popular and successful approach. The finite element method does not make use of orthogonal functions, but functions that are "almost orthogonal".
For basis functions that are not orthogonal the condition number
of the matrix may create problems during the solution process
due to, for example, round-off errors as will be illustrated in the
following. The computational example in the section Perfect approximation
applies the least_squares
function which invokes symbolic
methods to calculate and solve the linear system. The correct
solution c0=9,c1=−20,c2=10,ci=0 for i≥3 is perfectly
recovered.
Suppose we convert the matrix and right-hand side to floating-point arrays and then solve the system using finite-precision arithmetic, which is what one will (almost) always do in real life. This time we get astonishing results! Up to about N=7 we get a solution that is reasonably close to the exact one. Increasing N shows that seriously wrong coefficients are computed. Below is a table showing the solution of the linear system arising from approximating a parabola by functions of the form u(x)=c0+c1x+c2x2+⋯+c10x10. Analytically, we know that cj=0 for j>2, but numerically we may get cj≠0 for j>2.
exact | sympy | numpy32 | numpy64 |
---|---|---|---|
9 | 9.62 | 5.57 | 8.98 |
-20 | -23.39 | -7.65 | -19.93 |
10 | 17.74 | -4.50 | 9.96 |
0 | -9.19 | 4.13 | -0.26 |
0 | 5.25 | 2.99 | 0.72 |
0 | 0.18 | -1.21 | -0.93 |
0 | -2.48 | -0.41 | 0.73 |
0 | 1.81 | -0.013 | -0.36 |
0 | -0.66 | 0.08 | 0.11 |
0 | 0.12 | 0.04 | -0.02 |
0 | -0.001 | -0.02 | 0.002 |
Column 2: The matrix and vector are converted to
the data structure mpmath.fp.matrix
and the
mpmath.fp.lu_solve
function is used to solve the system.
Column 3: The matrix and vector are converted to
numpy
arrays with data type numpy.float32
(single precision floating-point number) and solved by
the numpy.linalg.solve
function.
Column 4: As column 3, but the data type is
numpy.float64
(double
precision floating-point number).
We see from the numbers in the table that double precision performs much better than single precision. Nevertheless, when plotting all these solutions the curves cannot be visually distinguished (!). This means that the approximations look perfect, despite the very wrong values of the coefficients.
Increasing N to 12 makes the numerical solver in numpy
abort with the message: "matrix is numerically singular".
A matrix has to be non-singular to be invertible, which is a requirement
when solving a linear system. Already when the matrix is close to
singular, it is ill-conditioned, which here implies that
the numerical solution algorithms are sensitive to round-off
errors and may produce (very) inaccurate results.
The reason why the coefficient matrix is nearly singular and ill-conditioned is that our basis functions ψi(x)=xi are nearly linearly dependent for large i. That is, xi and xi+1 are very close for i not very small. This phenomenon is illustrated in Figure. There are 15 lines in this figure, but only half of them are visually distinguishable. Almost linearly dependent basis functions give rise to an ill-conditioned and almost singular matrix. This fact can be illustrated by computing the determinant, which is indeed very close to zero (recall that a zero determinant implies a singular and non-invertible matrix): 10−65 for N=10 and 10−92 for N=12. Already for N=28 the numerical determinant computation returns a plain zero.
The 15 first basis functions xi, i=0,…,14.
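The ill-conditioning can be quantified by the condition number of the coefficient matrix. A small sketch, here for simplicity on Ω=[0,1], where the Gram matrix of the monomials, Ai,j=∫Ωxixjdx=1/(i+j+1), is the well-known Hilbert matrix:
import numpy as np

def gram_matrix_monomials(N):
    """A[i,j] = integral of x**i * x**j over [0,1] = 1/(i+j+1)."""
    return np.array([[1.0/(i + j + 1) for j in range(N+1)]
                     for i in range(N+1)])

for N in (4, 8, 12):
    A = gram_matrix_monomials(N)
    print(N, np.linalg.cond(A), np.linalg.det(A))
The condition number grows by several orders of magnitude for each few extra basis functions, which is the root of the round-off problems observed above.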
On the other hand, the double precision numpy
solver does run for
N=100, resulting in answers that are not significantly worse than
those in the table above, and large powers are
associated with small coefficients (e.g., cj<10−2 for 10≤j≤20 and cj<10−5 for j>20). Even for N=100 the
approximation still lies on top of the exact curve in a plot (!).
The conclusion is that visual inspection of the quality of the approximation may not uncover fundamental numerical problems with the computations. However, numerical analysts have studied approximations and ill-conditioning for decades, and it is well known that the basis {1,x,x2,x3,…,} is a bad basis. The best basis from a matrix conditioning point of view is to have orthogonal functions such that (ψi,ψj)=0 for i≠j. There are many known sets of orthogonal polynomials and other functions. The functions used in the finite element method are almost orthogonal, and this property helps to avoid problems when solving matrix systems. Almost orthogonal is helpful, but not enough when it comes to partial differential equations, and ill-conditioning of the coefficient matrix is a theme when solving large-scale matrix systems arising from finite element discretizations.
A set of sine functions is widely used for approximating functions
(note that the sines are orthogonal with respect to the L2 inner product as can be easily
verified using sympy
). Let us take
That is,
An approximation to the parabola f(x)=10(x−1)2−1 from the section Example of linear approximation, now on the domain Ω=[0,1], can then be computed by the
least_squares
function from the section Implementation of the least squares method:
N = 3
import sympy as sym
x = sym.Symbol('x')
psi = [sym.sin(sym.pi*(i+1)*x) for i in range(N+1)]
f = 10*(x-1)**2 - 1
Omega = [0, 1]
u, c = least_squares(f, psi, Omega)
comparison_plot(f, u, Omega)
Figure (left) shows the oscillatory approximation of ∑Nj=0cjsin((j+1)πx) when N=3. Changing N to 11 improves the approximation considerably, see Figure (right).
Best approximation of a parabola by a sum of 3 (left) and 11 (right) sine functions.
There is an error f(0)−u(0)=9 at x=0 in Figure regardless of how large N is, because all ψi(0)=0 and hence u(0)=0. We may help the approximation to be correct at x=0 by seeking
However, this adjustment introduces a new problem at x=1 since we now get an error f(1)−u(1)=f(1)−f(0)=−10 at this point. A more clever adjustment is to replace the f(0) term by a term that is f(0) at x=0 and f(1) at x=1. A simple linear combination f(0)(1−x)+xf(1) does the job:
This adjustment of u alters the linear system slightly. In the general case, we set
and the linear system becomes
The calculations can still utilize the least_squares
or
least_squares_orth
functions, but solve for u−b:
from src.approx1D import least_squares_orth
f0 = 0; f1 = -1
b = f0*(1-x) + x*f1
u_sum, c = least_squares_orth(f-b, psi, Omega)
u = b + u_sum
...evaluating matrix... (0,0) (1,1) (2,2) (3,3)
A: [1/2, 1/2, 1/2, 1/2]
b: [-40/pi**3 + 9/pi, 9/(2*pi), -40/(27*pi**3) + 3/pi, 9/(4*pi)]
coeff: [-80/pi**3 + 18/pi, 9/pi, -80/(27*pi**3) + 6/pi, 9/(2*pi)]
approximation: (-80/pi**3 + 18/pi)*sin(pi*x) + 9*sin(2*pi*x)/pi + (-80/(27*pi**3) + 6/pi)*sin(3*pi*x) + 9*sin(4*pi*x)/(2*pi)
Figure shows the result of the technique for ensuring the right boundary values. Even 3 sines can now adjust the f(0)(1−x)+xf(1) term such that u approximates the parabola really well, at least visually.
Best approximation of a parabola by a sum of 3 (left) and 11 (right) sine functions with a boundary term.
The choice of sine functions ψi(x)=sin((i+1)πx) has a great computational advantage: on Ω=[0,1] these basis functions are orthogonal, implying that Ai,j=0 if i≠j. In fact, when V contains the basic functions used in a Fourier series expansion, the approximation method derived in the section Approximation principles results in the classical Fourier series for f(x).
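The orthogonality claim is quickly verified with sympy; a small check for one pair of basis functions:
import sympy as sym

x = sym.Symbol('x')
i, j = 0, 3                              # any i != j
psi_i = sym.sin(sym.pi*(i+1)*x)
psi_j = sym.sin(sym.pi*(j+1)*x)
print(sym.integrate(psi_i*psi_j, (x, 0, 1)))   # 0 for i != j
print(sym.integrate(psi_i**2, (x, 0, 1)))      # 1/2 on the diagonal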
With orthogonal basis functions we can make the
least_squares
function (much) more efficient since we know that
the matrix is diagonal and only the diagonal elements need to be computed:
def least_squares_orth(f, psi, Omega):
N = len(psi) - 1
A = [0]*(N+1)
b = [0]*(N+1)
x = sym.Symbol('x')
for i in range(N+1):
A[i] = sym.integrate(psi[i]**2, (x, Omega[0], Omega[1]))
b[i] = sym.integrate(psi[i]*f, (x, Omega[0], Omega[1]))
c = [b[i]/A[i] for i in range(len(b))]
u = 0
for i in range(len(psi)):
u += c[i]*psi[i]
return u, c
As mentioned in the section Implementation of the least squares method, symbolic integration may fail or take a very long time. It is therefore natural to extend the implementation above with a version where we can choose between symbolic and numerical integration and fall back on the latter if the former fails:
def least_squares_orth(f, psi, Omega, symbolic=True):
N = len(psi) - 1
A = [0]*(N+1) # plain list to hold symbolic expressions
b = [0]*(N+1)
x = sym.Symbol('x')
for i in range(N+1):
# Diagonal matrix term
A[i] = sym.integrate(psi[i]**2, (x, Omega[0], Omega[1]))
# Right-hand side term
integrand = psi[i]*f
if symbolic:
I = sym.integrate(integrand, (x, Omega[0], Omega[1]))
if not symbolic or isinstance(I, sym.Integral):
print(('numerical integration of', integrand))
integrand = sym.lambdify([x], integrand, 'mpmath')
I = mpmath.quad(integrand, [Omega[0], Omega[1]])
b[i] = I
c = [b[i]/A[i] for i in range(len(b))]
    u = sum(c[i]*psi[i] for i in range(len(psi)))
return u, c
This function is found in the file approx1D.py
. Observe that
we here assume that ∫Ωψ2idx can always be computed symbolically, which is not an unreasonable assumption when the basis functions are orthogonal, but there is no guarantee, so an improved version of the function above would implement numerical integration also for the A[i] term.
Sometimes the basis functions ψi and/or the function f
have a nature that makes symbolic integration CPU-time
consuming or impossible.
Even though we implemented a fallback on numerical integration
of ∫Ωfψidx, considerable time might still be required
by sympy
just by attempting to integrate symbolically.
Therefore, it will be handy to have a function for fast
numerical integration and numerical solution
of the linear system. Below is such a method. It requires
Python functions f(x)
and psi(x,i)
for f(x) and ψi(x)
as input. The output is a mesh function
with values u
on the mesh with points in the array x
.
Three numerical integration methods are offered:
scipy.integrate.quad
(with absolute and relative tolerances set to 10−9, as in the code below),
mpmath.quad
(about machine precision), and a Trapezoidal
rule based on the points in x
(unknown accuracy, but
increasing with the number of mesh points in x
).
def least_squares_numerical(f, psi, N, x,
integration_method='scipy',
orthogonal_basis=False):
import scipy.integrate
A = np.zeros((N+1, N+1))
b = np.zeros(N+1)
Omega = [x[0], x[-1]]
dx = x[1] - x[0] # assume uniform partition
for i in range(N+1):
j_limit = i+1 if orthogonal_basis else N+1
for j in range(i, j_limit):
print(('(%d,%d)' % (i, j)))
if integration_method == 'scipy':
A_ij = scipy.integrate.quad(
lambda x: psi(x,i)*psi(x,j),
Omega[0], Omega[1], epsabs=1E-9, epsrel=1E-9)[0]
elif integration_method == 'sympy':
A_ij = mpmath.quad(
lambda x: psi(x,i)*psi(x,j),
[Omega[0], Omega[1]])
else:
values = psi(x,i)*psi(x,j)
A_ij = trapezoidal(values, dx)
A[i,j] = A[j,i] = A_ij
if integration_method == 'scipy':
b_i = scipy.integrate.quad(
lambda x: f(x)*psi(x,i), Omega[0], Omega[1],
epsabs=1E-9, epsrel=1E-9)[0]
elif integration_method == 'sympy':
b_i = mpmath.quad(
lambda x: f(x)*psi(x,i), [Omega[0], Omega[1]])
else:
values = f(x)*psi(x,i)
b_i = trapezoidal(values, dx)
b[i] = b_i
c = b/np.diag(A) if orthogonal_basis else np.linalg.solve(A, b)
u = sum(c[i]*psi(x, i) for i in range(N+1))
return u, c
def trapezoidal(values, dx):
"""Integrate values by the Trapezoidal rule (mesh size dx)."""
return dx*(np.sum(values) - 0.5*values[0] - 0.5*values[-1])
Here is an example on calling the function:
def psi(x, i):
return np.sin((i+1)*x)
x = np.linspace(0, 2*np.pi, 501)
N = 20
u, c = least_squares_numerical(lambda x: np.tanh(x-np.pi), psi, N, x,
orthogonal_basis=True)
(0,0) (1,1) (2,2) (3,3) (4,4) (5,5) (6,6) (7,7) (8,8) (9,9) (10,10) (11,11) (12,12) (13,13) (14,14) (15,15) (16,16) (17,17) (18,18) (19,19) (20,20)
Remark.
The scipy.integrate.quad
integrator is usually much faster than
mpmath.quad
.
The principle of minimizing the distance between u and f is an intuitive way of computing a best approximation u∈V to f. However, there are other approaches as well. One is to demand that u(xi)=f(xi) at some selected points xi, i∈Is:
We recognize that the equation ∑jcjψj(xi)=f(xi) is actually a linear system with N+1 unknown coefficients {cj}j∈Is:
with coefficient matrix and right-hand side vector given by
This time the coefficient matrix is not symmetric because ψj(xi)≠ψi(xj) in general. The method is often referred to as an interpolation method since some point values of f are given (f(xi)) and we fit a continuous function u that goes through the f(xi) points. In this case the xi points are called interpolation points. When the same approach is used to approximate differential equations, one usually applies the name collocation method and xi are known as collocation points.
Given f as a sympy
symbolic expression f
, {ψi}i∈Is
as a list psi
, and a set of points {xi}i∈Is as a list or array
points
, the following Python function sets up and solves the matrix system
for the coefficients {ci}i∈Is:
def interpolation(f, psi, points):
N = len(psi) - 1
A = sym.zeros(N+1, N+1)
b = sym.zeros(N+1, 1)
psi_sym = psi # save symbolic expression
x = sym.Symbol('x')
psi = [sym.lambdify([x], psi[i], 'mpmath') for i in range(N+1)]
f = sym.lambdify([x], f, 'mpmath')
for i in range(N+1):
for j in range(N+1):
A[i,j] = psi[j](points[i])
b[i,0] = f(points[i])
c = A.LUsolve(b)
# c is a sympy Matrix object, turn to list
c = [sym.simplify(c[i,0]) for i in range(c.shape[0])]
u = sym.simplify(sum(c[i]*psi_sym[i] for i in range(N+1)))
return u, c
The interpolation
function is a part of the approx1D
module.
We found it convenient in the above function to turn the expressions f
and
psi
into ordinary Python functions of x
, which can be called with
float
values in the list points
when building the matrix and
the right-hand side. The alternative is to use the subs
method
to substitute the x
variable in an expression by an element from
the points
list. The following session illustrates both approaches
in a simple setting:
x = sym.Symbol('x')
e = x**2 # symbolic expression involving x
p = 0.5 # a value of x
v = e.subs(x, p) # evaluate e for x=p
v
type(v)
sympy.core.numbers.Float
e = sym.lambdify([x], e) # make Python function of e
type(e)
function
v = e(p) # evaluate e(x) for x=p
v
0.25
type(v)
float
A nice feature of the interpolation or collocation method is that it avoids computing integrals. However, one has to decide on the location of the xi points. A simple, yet common choice, is to distribute them uniformly throughout the unit interval.
Let us illustrate the interpolation method by approximating our parabola f(x)=10(x−1)2−1 by a linear function on Ω=[1,2], using two collocation points x0=1+1/3 and x1=1+2/3:
x = sym.Symbol('x')
f = 10*(x-1)**2 - 1
psi = [1, x]
Omega = [1, 2]
points = [1 + sym.Rational(1,3), 1 + sym.Rational(2,3)]
u, c = interpolation(f, psi, points)
comparison_plot(f, u, Omega)
The resulting linear system becomes
with solution c0=−119/9 and c1=10. Figure (left) shows the resulting approximation u=−119/9+10x. We can easily test other interpolation points, say x0=1 and x1=2. This changes the line quite significantly, see Figure (right).
Approximation of a parabola by linear functions computed by two interpolation points: 4/3 and 5/3 (left) versus 1 and 2 (right).
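Continuing the session above, a quick check confirms that u and f coincide at the two interpolation points, as the method requires:
# u, f, x, and points are defined in the session above
for xi in points:
    print(sym.simplify(u.subs(x, xi) - f.subs(x, xi)))   # 0 at both points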
In the section Fourier series we explained the advantage of having a diagonal matrix: formulas for the coefficients {ci}i∈Is can then be derived by hand. For an interpolation (or collocation) method a diagonal matrix implies that ψj(xi)=0 if i≠j. One set of basis functions ψi(x) with this property is the Lagrange interpolating polynomials, or just Lagrange polynomials. (Although the functions are named after Lagrange, they were first discovered by Waring in 1779, rediscovered by Euler in 1783, and published by Lagrange in 1795.) Lagrange polynomials are key building blocks in the finite element method, so familiarity with these polynomials will be required anyway.
A Lagrange polynomial can be written as
for i∈Is. We see from (56) that all the ψi functions are polynomials of degree N which have the property
when xs is an interpolation (collocation) point. Here we have used the Kronecker delta symbol δis. This property implies that A is a diagonal matrix, i.e., Ai,j=0 for i≠j and Ai,j=1 when i=j. The solution of the linear system is then simply
and
We remark however that (57) does not necessarily imply that the matrix obtained by the least squares or projection methods is diagonal.
The following function computes the Lagrange interpolating polynomial
ψi(x) on the unit interval (0,1), given the interpolation points x0,…,xN in
the list or array points
:
def Lagrange_polynomial(x, i, points):
p = 1
for k in range(len(points)):
if k != i:
p *= (x - points[k])/(points[i] - points[k])
return p
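A quick numerical check of the property (57), using a few hypothetical interpolation points:
points = [0.0, 0.3, 0.7, 1.0]   # hypothetical interpolation points
for i in range(len(points)):
    # psi_i is 1 at points[i] and 0 at all other interpolation points
    print([Lagrange_polynomial(xs, i, points) for xs in points])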
The next function computes a complete basis, ψ0,…,ψN, using equidistant points throughout Ω:
def Lagrange_polynomials_01(x, N):
if isinstance(x, sym.Symbol):
h = sym.Rational(1, N-1)
else:
h = 1.0/(N-1)
points = [i*h for i in range(N)]
psi = [Lagrange_polynomial(x, i, points) for i in range(N)]
return psi, points
When x
is a sym.Symbol
object, we let the spacing between the
interpolation points, h
, be a sympy
rational number, so that we
get nice end results in the formulas for ψi. The other case,
when x
is a plain Python float
, signifies numerical computing, and
then we let h
be a floating-point number. Observe that the
Lagrange_polynomial
function works equally well in the symbolic and
numerical case - just think of x
being a sym.Symbol
object or a
Python float
. A little interactive session illustrates the
difference between symbolic and numerical computing of the basis
functions and points:
x = sym.Symbol('x')
psi, points = Lagrange_polynomials_01(x, N=2)
points
[0, 1]
psi
[1 - x, x]
x = 0.5 # numerical computing
psi, points = Lagrange_polynomials_01(x, N=2)
points
[0.0, 1.0]
psi
[0.5, 0.5]
That is, when used symbolically, the Lagrange_polynomials_01 function returns the symbolic expressions for the Lagrange polynomials, while when x is a numerical value the function returns the values of the basis functions evaluated at x. In the present example with N=2, the interpolation points are 0 and 1, so the two basis functions 1−x and x both take the value 0.5 at the evaluation point x=0.5; the property (57) applies only at the interpolation points themselves.
The Galerkin or least squares methods lead to an exact approximation if f lies in the space spanned by the basis functions. It could be of interest to see how the interpolation method with Lagrange polynomials as the basis is able to approximate a polynomial, e.g., a parabola. Running
x = sym.Symbol('x')
for N in 2, 4, 5, 6, 8, 10, 12:
f = x**2
psi, points = Lagrange_polynomials_01(x, N)
    u, c = interpolation(f, psi, points)
shows the result that up to N=4
we achieve an exact approximation,
and then round-off errors start to grow, such that
N=15
leads to a 15-degree polynomial for u where
the coefficients in front of xr for r>2 are
of size 10−5 and smaller. As the matrix is ill-conditioned
and we use floating-point arithmetic, we do not obtain the exact
solution. Still, we get a solution that is visually identical to the
exact solution. The reason is that the ill-conditioning causes
the system to have many solutions very close to the true solution.
While we are lucky for N=15
and obtain a solution that is
visually identical to the true solution, ill-conditioning may also
result in completely wrong results. As we continue with higher values, N=20
reveals that the
procedure is starting to fall apart as the approximate solution is around 0.9 at x=1.0,
where it should have
been 1.0. At N=30
the approximate solution is around 5⋅108 at x=1.
Trying out the Lagrange polynomial basis for approximating f(x)=sin2πx on Ω=[0,1] with the least squares and the interpolation techniques can be done by
N = 4  # e.g., four interpolation points, giving Lagrange polynomials of degree 3
x = sym.Symbol('x')
f = sym.sin(2*sym.pi*x)
psi, points = Lagrange_polynomials_01(x, N)
Omega=[0, 1]
u, c = least_squares(f, psi, Omega)
comparison_plot(f, u, Omega)
u, c = interpolation(f, psi, points)
comparison_plot(f, u, Omega)
Figure shows the results. There is a difference between the least squares and the interpolation technique but the difference decreases rapidly with increasing N.
Approximation via least squares (left) and interpolation (right) of a sine function by Lagrange interpolating polynomials of degree 3.
The next example concerns interpolating f(x)=|1−2x| on Ω=[0,1] using Lagrange polynomials. Figure shows a peculiar effect: the approximation starts to oscillate more and more as N grows. This numerical artifact is not surprising when looking at the individual Lagrange polynomials. Figure shows two such polynomials, ψ2(x) and ψ7(x), both of degree 11 and computed from uniformly spaced points xi=i/11, i=0,…,11, marked with circles. We clearly see the property of Lagrange polynomials: ψ2(xi)=0 and ψ7(xi)=0 for all i, except ψ2(x2)=1 and ψ7(x7)=1. The most striking feature, however, is the dominating oscillation near the boundary where ψ2>5 and ψ7=−10 in some points. The reason is easy to understand: since we force the functions to be zero at so many points, a polynomial of high degree is forced to oscillate between the points. This is called Runge's phenomenon and you can read a more detailed explanation on Wikipedia.
The oscillations can be reduced by a more clever choice of interpolation points, called the Chebyshev nodes:
on the interval Ω=[a,b].
Here is a flexible version of the Lagrange_polynomials_01
function above,
valid for any interval Ω=[a,b] and with the possibility to generate
both uniformly distributed points and Chebyshev nodes:
def Lagrange_polynomials(x, N, Omega, point_distribution='uniform'):
if point_distribution == 'uniform':
if isinstance(x, sym.Symbol):
h = sym.Rational(Omega[1] - Omega[0], N)
else:
h = (Omega[1] - Omega[0])/float(N)
points = [Omega[0] + i*h for i in range(N+1)]
elif point_distribution == 'Chebyshev':
points = Chebyshev_nodes(Omega[0], Omega[1], N)
psi = [Lagrange_polynomial(x, i, points) for i in range(N+1)]
return psi, points
def Chebyshev_nodes(a, b, N):
from math import cos, pi
return [0.5*(a+b) + 0.5*(b-a)*cos(float(2*i+1)/(2*(N+1))*pi) \
for i in range(N+1)]
All the functions computing Lagrange polynomials listed
above are found in the module file Lagrange.py
.
Figure shows the improvement of using Chebyshev nodes, compared with the equidistant points in Figure. The reason for this improvement is that the corresponding Lagrange polynomials have much smaller oscillations, which can be seen by comparing Figure (Chebyshev points) with Figure (equidistant points). Note the different scale on the vertical axes in the two figures and also that the Chebyshev points tend to cluster more around the element boundaries.
Another cure for undesired oscillations of higher-degree interpolating polynomials is to use lower-degree Lagrange polynomials on many small patches of the domain. This is actually the idea pursued in the finite element method. For instance, linear Lagrange polynomials on [0,1/2] and [1/2,1] would yield a perfect approximation to f(x)=|1−2x| on Ω=[0,1] since f is piecewise linear.
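A tiny numpy sketch of this idea, with one linear Lagrange interpolant per half of the domain (the helper name linear_patch is hypothetical):
import numpy as np

def f_abs(x):
    return np.abs(1 - 2*x)

def linear_patch(x, a, b):
    """Linear Lagrange interpolant of f_abs on the patch [a, b]."""
    return f_abs(a)*(b - x)/(b - a) + f_abs(b)*(x - a)/(b - a)

xcoor = np.linspace(0, 1, 11)
u = np.where(xcoor <= 0.5,
             linear_patch(xcoor, 0.0, 0.5),
             linear_patch(xcoor, 0.5, 1.0))
print(np.max(np.abs(u - f_abs(xcoor))))   # zero up to round-off: exact reproduction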
Interpolation of an absolute value function by Lagrange polynomials and uniformly distributed interpolation points: degree 7 (left) and 14 (right).
Illustration of the oscillatory behavior of two Lagrange polynomials based on 12 uniformly spaced points (marked by circles).
Interpolation of an absolute value function by Lagrange polynomials and Chebyshev nodes as interpolation points: degree 7 (left) and 14 (right).
Illustration of the less oscillatory behavior of two Lagrange polynomials based on 12 Chebyshev points (marked by circles). Note that the y-axis is different from [Figure](#fem:approx:global:Lagrange:fig:abs:Lag:unif:osc).
How does the least squares or projection methods work with Lagrange
polynomials?
We can just call the least_squares
function, but
sympy
has problems integrating the f(x)=|1−2x|
function times a polynomial, so we need to fall back on numerical
integration.
def least_squares(f, psi, Omega):
N = len(psi) - 1
A = sym.zeros(N+1, N+1)
b = sym.zeros(N+1, 1)
x = sym.Symbol('x')
for i in range(N+1):
for j in range(i, N+1):
integrand = psi[i]*psi[j]
I = sym.integrate(integrand, (x, Omega[0], Omega[1]))
if isinstance(I, sym.Integral):
# Could not integrate symbolically, fall back
# on numerical integration with mpmath.quad
integrand = sym.lambdify([x], integrand, 'mpmath')
I = mpmath.quad(integrand, [Omega[0], Omega[1]])
A[i,j] = A[j,i] = I
integrand = psi[i]*f
I = sym.integrate(integrand, (x, Omega[0], Omega[1]))
if isinstance(I, sym.Integral):
integrand = sym.lambdify([x], integrand, 'mpmath')
I = mpmath.quad(integrand, [Omega[0], Omega[1]])
b[i,0] = I
c = A.LUsolve(b)
c = [sym.simplify(c[i,0]) for i in range(c.shape[0])]
u = sum(c[i]*psi[i] for i in range(len(psi)))
return u, c
Illustration of an approximation of the absolute value function using the least squares method.
An alternative to the Taylor and Lagrange families of polynomials is the family of Bernstein polynomials. These polynomials are popular in visualization and we include a presentation of them for completeness. Furthermore, as we will demonstrate, the choice of basis functions may matter in terms of accuracy and efficiency. In fact, in finite element methods, a main challenge, from a numerical analysis point of view, is to determine appropriate basis functions for a particular purpose or equation.
On the unit interval, the Bernstein polynomials are defined in terms of powers of x and 1−x (the barycentric coordinates of the unit interval) as
The nine functions of a Bernstein basis of order 8.
The nine functions of a Lagrange basis of order 8.
Figure shows the basis functions of a Bernstein basis of order 8. This figure should be compared against Figure, which shows the corresponding Lagrange basis of order 8. The Lagrange basis is convenient because it is a nodal basis, that is, the basis functions are 1 at their nodal points and zero at all other nodal points, as described by (57). However, looking at Figure we also notice that the basis functions are oscillatory and have absolute values that are significantly larger than 1 between the nodal points. Consider for instance the basis function represented by the purple color. It is 1 at x=0.5 and 0 at all other nodal points, and hence this basis function represents the value at the mid-point. However, this function also has strong negative contributions close to the element boundaries, where it takes values lower than −2. For the Bernstein basis, all functions are positive and all function values lie in [0,1]. Therefore there is no oscillatory behavior. The main disadvantage of the Bernstein basis is that it is not a nodal basis: all functions contribute everywhere except at x=0 and x=1.
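A small sympy sketch of the Bernstein basis, assuming the standard definition with binomial coefficients, together with a check that the functions form a partition of unity and that sampled values lie in [0,1]:
import sympy as sym

x = sym.Symbol('x')

def bernstein_basis(n):
    """The n+1 Bernstein polynomials of order n on [0, 1]."""
    return [sym.binomial(n, i)*x**i*(1 - x)**(n - i) for i in range(n+1)]

B = bernstein_basis(8)           # the nine functions of order 8
print(sym.expand(sum(B)))        # 1: the basis is a partition of unity
print([Bi.subs(x, sym.Rational(1, 2)) for Bi in B])   # sampled values, all in [0, 1]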
Notice.
We have considered approximation with a sinusoidal basis and with Lagrange or Bernstein polynomials, all of which are frequently used for scientific computing. The Lagrange polynomials (of various order) are extensively used in finite element methods, while the Bernstein polynomials are more used in computational geometry. The Lagrange and the Bernstein families are, however, but a few in the jungle of polynomial spaces used for finite element computations and their efficiency and accuracy can vary quite substantially. Furthermore, while a method may be efficient and accurate for one type of PDE it might not even converge for another type of PDE. The development and analysis of finite element methods for different purposes is currently an intense research field and has been so for several decades. FEniCS has implemented a wide range of polynomial spaces and has a general framework for implementing new elements. While finite element methods explore different families of polynomials, the so-called spectral methods explore the use of sinusoidal functions or polynomials with high order. From an abstract point of view, the different methods can all be obtained simply by changing the basis for the trial and test functions. However, their efficiency and accuracy may vary substantially.
All the concepts and algorithms developed for approximation of 1D functions f(x) can readily be extended to 2D functions f(x,y) and 3D functions f(x,y,z). Basically, the extensions consist of defining basis functions ψi(x,y) or ψi(x,y,z) over some domain Ω, and for the least squares and Galerkin methods, the integration is done over Ω.
As in 1D, the least squares and projection/Galerkin methods lead to linear systems
where the inner product of two functions f(x,y) and g(x,y) is defined completely analogously to the 1D case (25):
One straightforward way to construct a basis in 2D is to combine 1D basis functions. Say we have the 1D vector space
A similar space for a function's variation in y can be defined,
We can then form 2D basis functions as tensor products of 1D basis functions.
Tensor products.
Given two vectors a=(a0,…,aM) and b=(b0,…,bN), their outer tensor product, also called the dyadic product, is p=a⊗b, defined through
In the tensor terminology, a and b are first-order tensors (vectors with one index, also termed rank-1 tensors), and then their outer tensor product is a second-order tensor (matrix with two indices, also termed rank-2 tensor). The corresponding inner tensor product is the well-known scalar or dot product of two vectors: p=a⋅b=∑Nj=0ajbj. Now, p is a rank-0 tensor.
Tensors are typically represented by arrays in computer code. In the above example, a and b are represented by one-dimensional arrays of length M and N, respectively, while p=a⊗b must be represented by a two-dimensional array of size M×N.
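In numpy, the outer tensor product of two vectors is available as np.outer; a minimal example:
import numpy as np

a = np.array([1, 2, 3])     # rank-1 tensor
b = np.array([10, 20])      # rank-1 tensor
p = np.outer(a, b)          # rank-2 tensor: p[i,j] = a[i]*b[j]
print(p.shape)              # (3, 2)
print(p)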
Tensor products can be used in a variety of contexts.
Given the vector spaces Vx and Vy as defined in (65) and (66), the tensor product space V=Vx⊗Vy has a basis formed as the tensor product of the basis for Vx and Vy. That is, if {ψi(x)}i∈Ix and {ψi(y)}i∈Iy are basis for Vx and Vy, respectively, the elements in the basis for V arise from the tensor product: {ψi(x)ψj(y)}i∈Ix,j∈Iy. The index sets are Ix={0,…,Nx} and Iy={0,…,Ny}.
The notation for a basis function in 2D can employ a double index as in
The expansion for u is then written as a double sum
Alternatively, we may employ a single index,
and use the standard form for u,
The single index can be expressed in terms of the double index through i=p(Ny+1)+q or i=q(Nx+1)+p.
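A small sympy sketch of the numbering i=q(Nx+1)+p applied to the monomial choice ˆψp(x)=xp, ˆψq(y)=yq used below (the order of the list corresponds to the single index):
import sympy as sym

x, y = sym.symbols('x y')
Nx, Ny = 2, 1
# Single index i = q*(Nx+1) + p runs through this list order
psi = [x**p*y**q for q in range(Ny+1) for p in range(Nx+1)]
print(psi)   # [1, x, x**2, y, x*y, x**2*y]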
Let us again consider an approximation with the least squares method, but now with basis functions in 2D. Suppose we choose ˆψp(x)=xp, and try an approximation with Nx=Ny=1:
Using a mapping to one index like i=q(Nx+1)+p, we get
With the specific choice f(x,y)=(1+x2)(1+2y2) on Ω=[0,Lx]×[0,Ly], we can perform actual calculations:
The right-hand side vector has the entries
There is a general pattern in these calculations that we can explore. An arbitrary matrix entry has the formula
where
are matrix entries for one-dimensional approximations. Moreover, i=q(Nx+1)+p and j=s(Nx+1)+r.
With ˆψp(x)=xp we have
and
for p,r∈Ix and q,s∈Iy.
Corresponding reasoning for the right-hand side leads to
Choosing Lx=Ly=2, we have
Figure illustrates the result.
Approximation of a 2D quadratic function (left) by a 2D bilinear function (right) using the Galerkin or least squares method.
The calculations can also be done using sympy
. The following code computes the matrix of our example
import sympy as sym
x, y, Lx, Ly = sym.symbols("x y L_x L_y")
def integral(integrand):
Ix = sym.integrate(integrand, (x, 0, Lx))
I = sym.integrate(Ix, (y, 0, Ly))
return I
basis = [1, x, y, x*y]
A = sym.Matrix(sym.zeros(4,4))
for i in range(len(basis)):
psi_i = basis[i]
for j in range(len(basis)):
psi_j = basis[j]
A[i,j] = integral(psi_i*psi_j)
We remark that sympy
may even write the output in LaTeX or C++ format
using the functions latex
and ccode
.
The least_squares
function from
the section Orthogonal basis functions and/or the
file approx1D.py
can with very small modifications solve 2D approximation problems.
First, let Omega
now be a list of the intervals in x and y direction.
For example, Ω=[0,Lx]×[0,Ly] can be represented
by Omega = [[0, L_x], [0, L_y]]
.
Second, the symbolic integration must be extended to 2D:
# DO NOT RUN THIS CELL, THIS IS A DEMONSTRATION
integrand = psi[i]*psi[j]
I = sym.integrate(integrand,
(x, Omega[0][0], Omega[0][1]),
(y, Omega[1][0], Omega[1][1]))
provided integrand
is an expression involving the sympy
symbols x
and y
.
The 2D version of numerical integration becomes
# DO NOT RUN THIS CELL, THIS IS A DEMONSTRATION
if isinstance(I, sym.Integral):
integrand = sym.lambdify([x,y], integrand, 'mpmath')
I = mpmath.quad(integrand,
[Omega[0][0], Omega[0][1]],
[Omega[1][0], Omega[1][1]])
The right-hand side integrals are modified in a similar way.
(We should add that mpmath.quad
is sufficiently fast
even in 2D, but scipy.integrate.nquad
is much faster.)
Third, we must construct a list of 2D basis functions. Here are two examples based on tensor products of 1D "Taylor-style" polynomials xi and 1D sine functions sin((i+1)πx):
def taylor(x, y, Nx, Ny):
return [x**i*y**j for i in range(Nx+1) for j in range(Ny+1)]
def sines(x, y, Nx, Ny):
return [sym.sin(sym.pi*(i+1)*x)*sym.sin(sym.pi*(j+1)*y)
for i in range(Nx+1) for j in range(Ny+1)]
The complete code appears in
approx2D.py
.
The previous hand calculation where a quadratic f was approximated by a bilinear function can be computed symbolically by
from src.approx2D import *
f = (1+x**2)*(1+2*y**2)
psi = taylor(x, y, 1, 1)
Omega = [[0, 2], [0, 2]]
u = least_squares(f, psi, Omega)
print(u)
...evaluating matrix... (0,0) (0,1) (0,2) (0,3) (1,1) (1,2) (1,3) (2,2) (2,3) (3,3)
('A:\n', Matrix([
[4,    4,    4,    4],
[4, 16/3,    4, 16/3],
[4,    4, 16/3, 16/3],
[4, 16/3, 16/3, 64/9]]), '\nb:\n', Matrix([
[308/9],
[140/3],
[   44],
[   60]]))
('coeff:', [-1/9, 4/3, -2/3, 8])
('approximation:', 8*x*y - 2*x/3 + 4*y/3 - 1/9)
('f:', 2*x**2*y**2 + x**2 + 2*y**2 + 1)
8*x*y - 2*x/3 + 4*y/3 - 1/9
print((sym.expand(f)))
2*x**2*y**2 + x**2 + 2*y**2 + 1
We may continue with adding higher powers to the basis:
psi = taylor(x, y, 2, 2)
u = least_squares(f, psi, Omega)
print(u)
...evaluating matrix... (0,0) (0,1) (0,2) (0,3) (0,4) (0,5) (0,6) (0,7) (0,8) (1,1) (1,2) (1,3) (1,4) (1,5) (1,6) (1,7) (1,8) (2,2) (2,3) (2,4) (2,5) (2,6) (2,7) (2,8) (3,3) (3,4) (3,5) (3,6) (3,7) (3,8) (4,4) (4,5) (4,6) (4,7) (4,8) (5,5) (5,6) (5,7) (5,8) (6,6) (6,7) (6,8) (7,7) (7,8) (8,8)
('A:\n', Matrix([
[   4,    4,   16/3,    4,    4,   16/3,   16/3,   16/3,    64/9],
[   4, 16/3,      8,    4, 16/3,      8,   16/3,   64/9,    32/3],
[16/3,    8,   64/5, 16/3,    8,   64/5,   64/9,   32/3,  256/15],
[   4,    4,   16/3, 16/3, 16/3,   64/9,      8,      8,    32/3],
[   4, 16/3,      8, 16/3, 64/9,   32/3,      8,   32/3,      16],
[16/3,    8,   64/5, 64/9, 32/3, 256/15,   32/3,     16,   128/5],
[16/3, 16/3,   64/9,    8,    8,   32/3,   64/5,   64/5,  256/15],
[16/3, 64/9,   32/3,    8, 32/3,     16,   64/5, 256/15,   128/5],
[64/9, 32/3, 256/15, 32/3,   16,  128/5, 256/15,  128/5, 1024/25]]), '\nb:\n', Matrix([
[    308/9],
[    140/3],
[  3248/45],
[       44],
[       60],
[    464/5],
[  2992/45],
[    272/3],
[31552/225]]))
('coeff:', [1, 0, 2, 0, 0, 0, 1, 0, 2])
('approximation:', 2*x**2*y**2 + x**2 + 2*y**2 + 1)
('f:', 2*x**2*y**2 + x**2 + 2*y**2 + 1)
2*x**2*y**2 + x**2 + 2*y**2 + 1
print((u-f))
2*x**2*y**2 + x**2 + 2*y**2 - (x**2 + 1)*(2*y**2 + 1) + 1
For Nx≥2 and Ny≥2 we recover the exact function f, as expected, since in that case f∈V, see the section Perfect approximation.
Extension to 3D is in principle straightforward once the 2D extension is understood. The only major difference is that we need the repeated outer tensor product,
In general, given vectors (first-order tensors) a(q)=(a(q)0,…,a(q)Nq), q=0,…,m, the tensor product p=a(0)⊗⋯⊗am has elements
The basis functions in 3D are then
with p∈Ix, q∈Iy, r∈Iz. The expansion of u becomes
A single index can be introduced also here, e.g., i=rNxNy+qNx+p, u=∑iciψi(x,y,z).
Use of tensor product spaces.
Constructing a multi-dimensional space and basis from tensor products of 1D spaces is a standard technique when working with global basis functions. In the world of finite elements, constructing basis functions by tensor products is much used on quadrilateral and hexahedra cell shapes, but not on triangles and tetrahedra. Also, the global finite element basis functions are almost exclusively denoted by a single index and not by the natural tuple of indices that arises from tensor products.