In this lab we will explore condition numbers and error analysis. To do this, we will need to be able to generate random points.

A histogram counts the number of points in a vector that fall inside bins, and can be used to plot the distribution that random points are drawn from. For example, `randn` generates normally distributed random numbers, distributed like

$${1 \over \sqrt{2\pi}} {\rm e}^{-x^2/2}$$

In [106]:

```
using PyPlot
plt[:hist](randn(10000));
```

We can check that the distribution is correct by comparing it to a plot of the true distribution, using the `normed=true` option so that the histogram is normalized to integrate to one, and the `bins=...` option to specify more refined bins:

In [107]:

```
plt[:hist](randn(100000);bins=-5.:0.3:5.,normed=true);
x=-5.:0.1:5.
plot(x,exp.(-x.^2/2)/sqrt(2π));
```

**Exercise 1(a)** Plot a histogram for the numbers generated by `rand`. Can you guess the distribution?

In [ ]:

```
```

**Exercise 1(b)** Create a function `rande(ε,N)` that returns `N` random points of magnitude at most `ε`. Plot a histogram with `ε=0.01` and `N=10000` to verify your approach works.

In [ ]:

```
```

Consider the problem $f : \mathbb R \rightarrow \mathbb R$ with the absolute value norm attached, defined by

$$f(x) = x^2.$$

Recall the definition of the *absolute condition number*

$$\hat \kappa_f(x,\epsilon) \triangleq \sup_{0 < |\delta x| \leq \epsilon} {|f(x + \delta x) - f(x)| \over |\delta x|}$$

and the *relative condition number*

$$\kappa_f(x,\epsilon) \triangleq \sup_{0 < |\delta x| \leq \epsilon} {|f(x + \delta x) - f(x)|/|f(x)| \over |\delta x|/|x|}.$$

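One way to estimate such a condition number numerically is Monte Carlo sampling: draw many random perturbations $\delta x$ with $|\delta x| \leq \epsilon$ and take the maximum difference quotient. A minimal sketch, using the built-in `rand` rather than the `rande` of Exercise 1(b), and a sample function $g(x) = \sin x$ rather than the $f$ of the exercises:

```julia
# Monte Carlo estimate of an absolute condition number, sketched for
# g(x) = sin(x) at x = 1 (NOT the f of the exercises).
g = sin
x = 1.0
ε = 0.01
N = 10000
δx = ε*(2rand(N) .- 1)                             # N points uniformly in [-ε, ε]
κ̂ = maximum(abs.(g.(x .+ δx) .- g(x)) ./ abs.(δx))  # max difference quotient
```

As $\epsilon \rightarrow 0$ this estimate approaches $|g'(1)| = \cos 1 \approx 0.54$.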
**Exercise 2(a)** Use the random number generator from **1(b)** to estimate the absolute condition number around $x=1$ and $\epsilon = 0.01$, by finding the maximum value of

$${|f(x + \delta x) - f(x)| \over |\delta x|}$$

over the random perturbations $\delta x$. (Hint: `maximum(v)` returns the largest number in a vector `v`.) Ensure that your approximation is accurate by increasing the number of random points `N` used until you observe that the first *two* digits have converged.

In [ ]:

```
```

**Exercise 2(b)** Repeat **2(a)** with $x=2$ and $x=3$, and with $\epsilon =0.001$ and $\epsilon =0.0001$. Can you conjecture a formula for $\hat \kappa_f(x,0)$? (**Advanced**: Prove that this conjecture is true.)

In [ ]:

```
```

**Exercise 2(c)** Repeat parts **2(a)** and **2(b)** with the *relative* condition number. What formula do you conjecture for $\kappa_f(x,0)$? (**Advanced**: Prove that this conjecture is true.)

In [ ]:

```
```

**Exercise 2(d)** Repeat parts **2(a-c)** with $f(x)=x^3$. Conjecture a formula for the absolute and relative condition number when $f$ is a smooth function, in terms of the derivative of $f$. (**Advanced**: Prove that this conjecture is true.)

In [ ]:

```
```

We now consider the forward and backward error for floating point arithmetic. We will use `BigFloat` to simulate exact arithmetic. (This is an approximation, but its errors will be much smaller, so it is justified.) Recall that we are interested in the *relative forward error*

$$\left| {f(x) - \tilde f(x) \over f(x)} \right|$$

and the *relative backward error* $\left| {\Delta x \over x} \right|$, where $\Delta x$ is chosen so that $\tilde f(x) = f(x + \Delta x)$.

The `BigFloat(x)` command converts a number to the `BigFloat` format, a floating point number with many more bits (the default is 256 bits, though `BigFloat` can use even more if needed).
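The precision can be raised using the standard `setprecision` function, which changes the number of bits used within a `do`-block:

```julia
# By default BigFloat carries 256 bits; setprecision raises this
# temporarily inside the do-block.
default_bits = precision(BigFloat)    # 256 by default
third = setprecision(BigFloat, 512) do
    BigFloat(1)/BigFloat(3)           # 1/3 computed with 512 bits
end
```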

In [108]:

```
BigFloat(0.25)
```

Out[108]:

**Warning** Only convert numbers from `Float64` if they are viewed as *exact*. The example above was fine, since 0.25 is representable exactly in `Float64`. The following shows the issue with converting `1/3`:

In [109]:

```
x64=1/3
x_wrong=BigFloat(x64) # x64 is only accurate to 52 base-2 bits, so the remaining bits are incorrect
x=BigFloat(1)/BigFloat(3) # both 1 and 3 are representable exactly as BigFloats
x_wrong,x # the latter is accurate to the last base-2 bit
```

Out[109]:
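As a warm-up for the exercises below, here is the relative forward error computed for a different problem than the one in the exercises: the `Float64` square root of 2, using `BigFloat` as the reference value.

```julia
# Relative forward error of sqrt in Float64, with BigFloat as "exact".
fapprox = sqrt(2.0)               # Float64 computation
fexact = sqrt(BigFloat(2))        # 256-bit reference
rel_forward = abs(fexact - fapprox)/abs(fexact)
# rel_forward is on the order of machine epsilon (about 1e-16)
```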

**Exercise 3(a)** Calculate the relative forward error of approximating $f(x) = x^2$ by $\tilde f(x) = {\rm fl}(x) \otimes {\rm fl}(x)$ for $x=1/3$ in `Float64` arithmetic, using `BigFloat` to approximate the true $f(x)$.

In [ ]:

```
```

**Exercise 3(b)** Calculate numerically the $\delta$ so that $\tilde f(x) = x^2(1+\delta)$. Verify that

```
x^2*(1+δ)-x64^2
```

is zero.

In [ ]:

```
```

**Exercise 3(c)** Calculate the backward error `Δx` of approximating $f$ by $\tilde f$ at $x = 1/3$. Verify that

```
(x+Δx)^2 - x64^2
```

is zero (to roughly 75 digits).

In [ ]:

```
```

Last lab we considered the Householder reflection constructed from a vector $\mathbf x$ via:

$\begin{align*} \mathbf w &= \mathbf x - \alpha \|{\mathbf x}\| \mathbf e_1, \qquad \alpha = \pm 1,\cr \mathbf v &= \mathbf w / \|\mathbf w\|,\cr Q_{\mathbf v} &\triangleq I - 2 \mathbf v \mathbf v^\top \end{align*}$

so that

$$Q_{\mathbf v}\mathbf x = \begin{pmatrix} \alpha \|\mathbf x\| \cr 0 \cr \vdots \cr 0\end{pmatrix}$$

An outstanding question is the choice of $\alpha$: $\alpha = {\rm sign}(x_1)$ or $\alpha = -{\rm sign}(x_1)$. We will investigate this question by looking at the error of the problem

$$f(\mathbf x) = Q_{\mathbf v} \mathbf x,$$

where we know the exact value, versus the implementation on a computer in floating point arithmetic, which we denote $\tilde f(\mathbf x)$. Here we have $f, \tilde f : \mathbb R^n \rightarrow \mathbb R^n$ with the 2-norm attached.
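As a refresher, here is the construction applied to one concrete vector, with the sign choice fixed at $\alpha = {\rm sign}(x_1)$ (this is just an illustration, not the `house(s,x)` function asked for below):

```julia
using LinearAlgebra  # for norm (in Julia ≥ 0.7; earlier versions have norm in Base)

# Householder reflection built from a concrete vector x, with α = sign(x[1]).
x = [3.0, 4.0, 0.0]
α = sign(x[1])
w = x - α*norm(x)*[1.0; zeros(length(x)-1)]
v = w/norm(w)
Qx = x - 2v*(v'x)   # apply Q_v = I - 2vv' without forming the matrix
# Qx equals [α*norm(x), 0, 0] = [5.0, 0.0, 0.0] up to rounding
```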

**Exercise 4(a)** Write a function `house(s,x)` that returns $Q_{\mathbf v}$ as defined above, with `α = s*sign(x[1])`.

In [ ]:

```
```

**Exercise 4(b)** Consider the vector $\mathbf x=[1.,\epsilon]$. Plot the forward error $\|f(\mathbf x)-\tilde f(\mathbf x)\|$ for `s=1`. (Hint: use a `loglog` plot with `ε = 1./logspace(1,13,100)`.)

In [ ]:

```
```

**Exercise 4(c)** Repeat the previous exercise with `s=-1`. What do you conclude about the stability of the two choices?

In [ ]:

```
```

**Exercise 4(d)** Backward error analysis for this problem is impossible. Why is that?