In lectures we saw that the error in the Trapezium rule satisfies

$$\int_0^1 f(x) dx - Q_n[f] = -{1 \over 12 n^2}({f'(1) - f'(0)}) + O(n^{-4})$$where

$$Q_n[f] \triangleq {1 \over n}\left[{f(0) \over 2} + \sum_{j=1}^{n-1} f(x_j) + {f(1) \over 2}\right]$$and, as always, $x_j = j h$ for $h=1/n$.

This lecture explores ways we can use this formula to accelerate convergence when we don't know the derivatives of $f$.

**Warning** These are still not the best numerical integration schemes when one can choose the grid! But they are fairly powerful when we are restricted to knowing the data only at evenly spaced points.

It may be convenient to use the `trap` routine from lectures:

In [ ]:

```
function trap(f,a,b,n)
    h = (b-a)/n                # grid spacing
    x = linspace(a,b,n+1)      # evenly spaced points a, a+h, …, b
    v = f(x)
    h/2*v[1] + sum(v[2:end-1])*h + h/2*v[end]
end;
```
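As a quick sanity check (a sketch assuming the `trap` routine above has been run, in the lecture's version of Julia), we can compare it against the exact value $\int_0^1 e^x dx = e - 1$:

```
exact = exp(1) - 1                       # ∫₀¹ eˣ dx = e - 1
err = abs(trap(exp, 0, 1, 100) - exact)
# the leading error term predicts roughly (e-1)/(12·100²) ≈ 1.4e-5
```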

The idea behind Romberg quadrature is to use *Richardson extrapolation*, which is a general approach to convergence acceleration. Richardson extrapolation is based on the observation that the constant in the first term of the error *does not* depend on $n$. That is, we have the error

$$\int_0^1 f(x) dx - Q_n[f] = {\alpha \over n^2} + O(n^{-4})$$where

$\alpha= -{f'(1) - f'(0) \over 12}$ only depends on $f$. This means that, even if we don't know $\alpha$, we can cancel it out by evaluating the Trapezium rule for different choices of $n$.
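As a concrete illustration (a sketch assuming the `trap` routine above), one can check that $n^2$ times the error approaches the constant $\alpha = -(e-1)/12$ for $f(x) = e^x$:

```
exact = exp(1) - 1                  # ∫₀¹ eˣ dx
α = -(exp(1) - 1)/12                # -(f'(1) - f'(0))/12 for f = exp
for n in (10, 100, 1000)
    # n²·(error) - α should itself shrink like O(n⁻²)
    println(n^2*(exact - trap(exp, 0, 1, n)) - α)
end
```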

**Exercise 1(a)** What is the first term of the error for $Q_{2n}[f]$? Show numerically that subtracting out this term does give $O(n^{-4})$, for $f(x) = e^x$.

In [ ]:

```
```
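A possible sketch for 1(a) (assuming `trap` from above): since $Q_{2n}[f]$ uses spacing $h = 1/(2n)$, its first error term is $\alpha/(2n)^2 = \tfrac{1}{4}\alpha n^{-2}$. Subtracting the known first term from the error for $f(x) = e^x$ should leave an $O(n^{-4})$ remainder, so doubling $n$ should shrink it by a factor of about $2^4 = 16$:

```
exact = exp(1) - 1
α = -(exp(1) - 1)/12
remainder(n) = exact - trap(exp, 0, 1, n) - α/n^2   # error minus its first term
remainder(10)/remainder(20)                          # ≈ 16 for an O(n⁻⁴) remainder
```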

**Exercise 1(b)** Choose constants $A$ and $B$, independent of $n$, so that

$$A Q_n[f] + B Q_{2n}[f] = \int_0^1 f(x) dx + O(n^{-4})$$

(Hint: we clearly have $A+B = 1$. What other condition is there? Reduce this to a $2 \times 2$ linear system.)
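One way to organize the hinted system (a sketch, not the only possibility): the $n^{-2}$ error terms are $A\alpha n^{-2}$ and $\tfrac{1}{4}B\alpha n^{-2}$, so they cancel exactly when $A + B/4 = 0$, which together with $A + B = 1$ gives a $2\times 2$ linear system:

```
# rows: consistency A + B = 1, and cancellation A + B/4 = 0
A, B = [1 1; 1 1/4] \ [1, 0]
# solving by hand gives A = -1/3, B = 4/3, i.e. (4Q_{2n}[f] - Q_n[f])/3
```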

**Exercise 1(c)** Demonstrate that this does indeed work numerically.

In [ ]:

```
```
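A sketch of the numerical check (assuming `trap` from above and the constants $A = -1/3$, $B = 4/3$ from 1(b)): the combined rule's error should drop by about $16$ each time $n$ doubles:

```
exact = exp(1) - 1
richardson(n) = (4*trap(exp, 0, 1, 2n) - trap(exp, 0, 1, n))/3
for n in (10, 20, 40)
    println(abs(exact - richardson(n)))   # each line ≈ 1/16 of the previous
end
```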

**Exercise 2(a)** What are the first and second terms of the error for $Q_{n}[f]$, $Q_{2n}[f]$ and $Q_{3n}[f]$?

**Exercise 2(b) Advanced** Set up a linear system to choose constants $A$, $B$ and $C$, independent of $n$, so that we have

$$A Q_n[f] + B Q_{2n}[f] + C Q_{3n}[f] = \int_0^1 f(x) dx + O(n^{-6})$$

Demonstrate that this holds true numerically.

In [ ]:

```
```
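A possible sketch for 2(b) (assuming `trap` from above): writing the error of $Q_{mn}[f]$ as $\alpha (mn)^{-2} + \beta (mn)^{-4} + O(n^{-6})$ for $m = 1, 2, 3$, the conditions become $A+B+C = 1$, $A + B/4 + C/9 = 0$ and $A + B/16 + C/81 = 0$:

```
M = [1 1    1;
     1 1/4  1/9;
     1 1/16 1/81]
A, B, C = M \ [1, 0, 0]            # by hand: A = 1/24, B = -16/15, C = 81/40
exact = exp(1) - 1
combo(n) = A*trap(exp, 0, 1, n) + B*trap(exp, 0, 1, 2n) + C*trap(exp, 0, 1, 3n)
for n in (10, 20, 40)
    println(abs(exact - combo(n)))  # should drop by ≈ 2⁶ = 64 per doubling
end
```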

The error quickly reaches close to machine epsilon, and so it's difficult to see the faster convergence. We can overcome this limitation using `BigFloat`.

**Exercise 3(a)** We can call `quadgk` with `BigFloat`s:

```
quadgk(exp,BigFloat(0),BigFloat(1))
```

Verify that this is correct to almost 76 digits, using the exact solution evaluated as a `BigFloat`.

In [ ]:

```
```
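A sketch of the verification (note: in recent versions of Julia `quadgk` lives in the QuadGK package rather than in Base, so `using QuadGK` may be needed): compute the exact value $e - 1$ in `BigFloat` precision and compare.

```
exact = exp(BigFloat(1)) - 1             # e - 1 to ≈ 77 significant digits
val, errbound = quadgk(exp, BigFloat(0), BigFloat(1))
abs(val - exact)                         # should be comparable to eps(BigFloat) ≈ 1.7e-77
```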

**Exercise 3(b)** How would you call `trap` and have it return a `BigFloat`?

In [ ]:

```
```
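One possible sketch (assuming `trap` from above, in the lecture's version of Julia): passing `BigFloat` endpoints makes `h`, the grid and hence the returned value `BigFloat`s, since `BigFloat` arithmetic propagates through the routine:

```
v = trap(exp, BigFloat(0), BigFloat(1), 100)
typeof(v)      # should be BigFloat
```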

**Exercise 3(c)** Compare the error convergence of the unmodified Trapezium rule, 1(b) and 2(b) with `BigFloat`, using a `loglog` plot. (Hint: a `BigFloat` divided by an integer is also a `BigFloat`. Since `BigFloat` is slow, avoid doing computations with $n \geq 10^4$ or so. Make sure the errors are converted to `Float64` before plotting.)

In [ ]:

```
```
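A sketch of the comparison (assuming `trap` from above, the constants from 1(b) and 2(b) — $-1/3, 4/3$ and $1/24, -16/15, 81/40$, obtained by solving those exercises — and a plotting package such as PyPlot providing `loglog`); note the modest range of $n$, since `BigFloat` arithmetic is slow:

```
exact = exp(BigFloat(1)) - 1
ns = [2^k for k in 3:9]                   # keep n well below 10⁴
T(n) = trap(exp, BigFloat(0), BigFloat(1), n)
err0 = [abs(exact - T(n))                                 for n in ns]  # plain Trapezium
err1 = [abs(exact - (4T(2n) - T(n))/3)                    for n in ns]  # 1(b)
err2 = [abs(exact - (T(n)/24 - 16T(2n)/15 + 81T(3n)/40))  for n in ns]  # 2(b)
# convert to Float64 before plotting, e.g. with PyPlot (an assumption):
# loglog(ns, map(Float64, err0)); loglog(ns, map(Float64, err1)); loglog(ns, map(Float64, err2))
```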