Lecture 29: The Euler–Maclaurin formula

In this lecture, we complete the proof of the Euler–Maclaurin formula. We also see that it can be used to accelerate the convergence of the Trapezium rule when derivatives at the boundary are known.

Recall the formula from Lecture 27:

$$\int_0^h \left[f(x) - \left(f(0) + x {f(h) - f(0) \over h}\right) \right] dx = - {f'(h) - f'(0) \over 12} h^2 +{h^2 \over 2} \int_0^h f''(x) B_2(x/h) dx.$$

But in general, using the facts that $B_j'(x) = j B_{j-1}(x)$ and that $B_j = B_j(0) = B_j(1)$ for $j \geq 2$, we have:

$$ \begin{align*} {h^{j-1} \over (j-1)!} \int_0^h f^{(j-1)}(x) B_{j-1}(x/h) dx &= {h^{j} \over j!} \int_0^h f^{(j-1)}(x) [B_{j}(x/h)]' dx = h^{j} {f^{(j-1)}(h)B_{j}(1) - f^{(j-1)}(0) B_{j}(0) \over j!} - {h^{j} \over j!} \int_0^h f^{(j)}(x) B_{j}(x/h) dx \cr & =h^{j} B_{j} {f^{(j-1)}(h) - f^{(j-1)}(0) \over j!} - {h^{j} \over j!} \int_0^h f^{(j)}(x) B_{j}(x/h) dx \end{align*} $$

Thus by induction we have (assuming $f$ is smooth)

$$\begin{align*} \int_0^h \left[f(x) - \left(f(0) + x {f(h) - f(0) \over h}\right) \right] dx &= - \sum_{j=2}^\alpha (-)^j h^j B_j{f^{(j-1)}(h) - f^{(j-1)}(0) \over j!} + (-)^\alpha {h^\alpha \over \alpha!} \int_0^h f^{(\alpha)}(x) B_\alpha(x/h) dx \cr & = - \sum_{j=2}^\alpha (-)^j h^j B_j{f^{(j-1)}(h) - f^{(j-1)}(0) \over j!} + O(h^{\alpha+1}). \end{align*} $$

(Bounding the integral term by $O(h)$ is left as an easy exercise.)
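For instance, here is a quick numerical version of that exercise for $\alpha = 2$ and $f = \exp$ (an added sketch, not from the lecture; it uses the QuadGK package for `quadgk`): the integral divided by $h$ stays bounded as $h \to 0$ (here it even decays, since $\int_0^1 B_2(t) dt = 0$).

using QuadGK

B2poly(t)=t^2-t+1/6   # the Bernoulli polynomial B_2(t)

for h in (0.1,0.01,0.001)
    I=quadgk(x->exp(x)*B2poly(x/h),0,h)[1]   # remainder integral with f'' = exp
    println("h = $h:  integral/h = $(I/h)")  # bounded, as the exercise asserts
end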

Note that the odd Bernoulli numbers $B_j$ vanish for $j \geq 3$, a fact which we haven't proven; this is why only even powers of $h$ survive in the expansion.
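As a sanity check (an added sketch, not part of the lecture), we can compute the Bernoulli numbers exactly from the recurrence $\sum_{k=0}^{m} {m+1 \choose k} B_k = 0$ for $m \geq 1$, which follows from expanding the identity $B_{m+1}(1) = B_{m+1}(0)$:

function bernoulli(n)
    # returns the exact rationals B_0, B_1, ..., B_n
    B=zeros(Rational{BigInt},n+1)
    B[1]=1    # B_0 = 1
    for m=1:n
        B[m+1]=-sum(binomial(big(m+1),k)*B[k+1] for k=0:m-1)//(m+1)
    end
    B
end

bernoulli(8)   # B_3 = B_5 = B_7 = 0, while B_2 = 1/6 and B_4 = -1/30 appear below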

Before we continue, let's double check that this formula is correct. Here is the error of the Trapezium rule for a moderate choice of $n$:

In [5]:
using QuadGK   # in current Julia, quadgk is provided by the QuadGK package

n=10
h=1/n

f=exp

ex=quadgk(f,0.,h)[1]    # accurate value of the integral over the panel [0,h]
tr=(f(h)+f(0))/2*h      # Trapezium rule on the panel

ex-tr
Out[5]:
-8.7627828134762e-5

The first error term predicts the error to high accuracy:

In [6]:
B2=1/6   # the Bernoulli number B_2

-h^2*B2/2*(exp(h)-exp(0))  # is approximately ex - tr
Out[6]:
-8.764243172970644e-5

Indeed, taking the difference leaves something several orders of magnitude smaller (roughly the size of the next error term):

In [7]:
h=0.1

f=exp

ex=quadgk(f,0.,h)[1]

tr=(f(h)+f(0))/2*h

ex-tr - (-h^2*B2/2*(exp(h)-exp(0)))   # residual after removing the first error term
Out[7]:
1.460359494444253e-8

Subtracting out the second error term does even better:

In [8]:
h=0.1

f=exp

ex=quadgk(f,0.,h)[1]

tr=(f(h)+f(0))/2*h



B4=-1/30


(ex-tr  - (-h^2*B2/2*(exp(h)-exp(0)) - h^4*B4/factorial(4)*(exp(h)-exp(0))))
Out[8]:
-3.477010508939199e-12
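We can push this further. The following added loop (it assumes the `bernoulli` function from the earlier sketch, and uses the fact that every derivative of $\exp$ is $\exp$) subtracts the error terms up to order $\alpha$, and the residual shrinks accordingly:

B=bernoulli(8)   # exact Bernoulli numbers B_0,...,B_8

for α=2:2:8
    pred=-sum((-1)^j*Float64(B[j+1])*h^j/factorial(j)*(exp(h)-exp(0)) for j=2:α)
    println("α = $α:  residual = $(ex-tr-pred)")
end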

Trapezium rule over [0,1]

The previous computation gives the error on the panel $[0,h]$. On the panel $[x_{k-1},x_{k}]$ the same argument, applied to the linear interpolant through $(x_{k-1},f(x_{k-1}))$ and $(x_k,f(x_k))$, gives:

$$\begin{align*} \int_{x_{k-1}}^{x_{k}} \left[ f(x) - \left(f(x_{k-1}) + (x-x_{k-1}) {f(x_k) - f(x_{k-1}) \over h}\right) \right] dx &= - \sum_{j=2}^\alpha (-)^j h^j B_j{f^{(j-1)}(x_k) - f^{(j-1)}(x_{k-1}) \over j!} + (-)^\alpha {h^\alpha \over \alpha!} \int_{x_{k-1}}^{x_{k}} f^{(\alpha)}(x) B_\alpha((x-x_{k-1})/h) dx \cr & = - \sum_{j=2}^\alpha (-)^j h^j B_j{f^{(j-1)}(x_k) - f^{(j-1)}(x_{k-1}) \over j!} + O(h^{\alpha+1}). \end{align*} $$

Thus summing over all $n$ panels we have

$$ \begin{align*} \int_0^1 f(x) dx - h\left({f(x_0) \over 2} + \sum_{k=1}^{n-1} f(x_k) + {f(x_n) \over 2} \right) &= \sum_{k=1}^n \left[-\sum_{j=2}^\alpha (-)^j h^j B_j{f^{(j-1)}(x_k) - f^{(j-1)}(x_{k-1})\over j!} + O(h^{\alpha+1})\right] \cr &=-\sum_{j=2}^\alpha {(-)^j h^j B_j \over j!} \sum_{k=1}^n \left[f^{(j-1)}(x_k) - f^{(j-1)}(x_{k-1})\right] + O(n^{-\alpha})\cr &=-\sum_{j=2}^\alpha {(-)^j h^j B_j \over j!} \ \left[f^{(j-1)}(1) - f^{(j-1)}(0)\right] + O(n^{-\alpha}) \end{align*} $$

where the second line uses $n h^{\alpha+1} = n^{-\alpha}$ (since $h = 1/n$) and the last reduction uses the fact that the sum over $k$ telescopes. (Note that we improve the error estimate simply by applying the formula with a higher $\alpha$.)

Let's check that this error expression works:

In [9]:
h=0.1

f=exp

ex=quadgk(f,0.,1.)[1]

function trap(f,a,b,n)
    # composite Trapezium rule with n panels on [a,b]
    h=(b-a)/n
    x=range(a,b,length=n+1)
    v=f.(x)
    h*(v[1]/2 + sum(v[2:end-1]) + v[end]/2)
end
Out[9]:
trap (generic function with 1 method)

Indeed, subtracting out the higher-order terms improves the accuracy:

In [10]:
n=10
h=1/n
tr=trap(f,0.,1.,n)

ex-tr - (-h^2*B2/2*(exp(1)-exp(0)) - h^4*B4/factorial(4)*(exp(1)-exp(0)))
Out[10]:
-5.6807609831455164e-11
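The same idea can be packaged as a function. Here is a sketch of an endpoint-corrected Trapezium rule (my own wrapper around `trap` and `bernoulli` above, not code from the lecture), where `df(j,x)` is assumed to return the $j$-th derivative of $f$ at $x$:

function corrected_trap(f,df,n,α)
    # Trapezium rule on [0,1] plus the Euler–Maclaurin corrections up to order α
    h=1/n
    B=bernoulli(α)
    trap(f,0.,1.,n) -
        sum((-1)^j*Float64(B[j+1])*h^j/factorial(j)*(df(j-1,1.)-df(j-1,0.)) for j=2:α)
end

corrected_trap(exp,(j,x)->exp(x),10,4) - (exp(1)-exp(0))   # error ≈ 6e-11, versus ≈ 1.4e-3 uncorrected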
To summarise:

  1. This proves that the Trapezium rule is $O(n^{-2})$.
  2. It shows that we can predict the error and, when boundary derivatives are known, increase the accuracy.
  3. Periodic, infinitely differentiable functions have error $O(n^{-\alpha})$ for every $\alpha$, since all the boundary terms $f^{(j-1)}(1) - f^{(j-1)}(0)$ cancel; see the sketch below.
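To illustrate the last point, here is a small added example (the integrand is my own choice): for the smooth 1-periodic function $g(x) = e^{\cos 2\pi x}$, every boundary term $g^{(j-1)}(1) - g^{(j-1)}(0)$ vanishes, and the error of `trap` decays faster than any power of $n$:

g(x)=exp(cos(2π*x))     # smooth and 1-periodic

exact=quadgk(g,0.,1.)[1]

for n in (2,4,8,16)
    println("n = $n:  error = $(trap(g,0.,1.,n)-exact)")
end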