**Abstract**: Bayesian formalisms deal with uncertainty in parameters by placing probability distributions over them.

We can motivate the introduction of probability by considering systems where there are more observations than unknowns. In particular, we can consider the simple fitting of the gradient and offset of a line, $$ \dataScalar = m\inputScalar +c. $$ What happens if we have three pairs of observations of $\inputScalar$ and $\dataScalar$, $\{\inputScalar_i, \dataScalar_i\}_{i=1}^3$? The issue can be resolved by introducing a type of slack variable, $\noiseScalar_i$, known as noise, such that for each observation we have the equation, $$ \dataScalar_i = m\inputScalar_i + c + \noiseScalar_i. $$

What about the situation where you have more parameters than data in
your simultaneous equations? This is known as an *underdetermined*
system. In fact this setup is in some sense *easier* to solve, because
we don't need to think about introducing a slack variable (although it
might make a lot of sense from a *modelling* perspective to do so).

The way Laplace proposed resolving an overdetermined system was to
introduce slack variables, $\noiseScalar_i$, which needed to be
estimated for each point. The slack variable represents the difference
between our actual prediction and the true observation. This is known as
the *residual*. By introducing the slack variables we now have an
additional $\numData$ variables to estimate, one for each data point,
$\{\noiseScalar_i\}$. Introduction of these $\numData$ variables, plus
the original $m$ and $c$, gives us $\numData+2$ parameters to be
estimated from $\numData$ observations, which actually turns the
overdetermined system into an *underdetermined* one. However, we then
make a probabilistic assumption about the slack variables: we assume
that the slack variables are distributed according to a probability
density. And for the moment we have been assuming that density is the
Gaussian,
$$\noiseScalar_i \sim \gaussianSamp{0}{\dataStd^2},$$ with zero mean and
variance $\dataStd^2$.
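As a small numerical sketch of this idea (the values of $m$, $c$ and $\dataStd$ below are illustrative, not from the text), we can generate three observations from a line plus Gaussian noise and confirm that the residuals recover the slack variables exactly when the true parameters are known.

```python
import numpy as np

np.random.seed(0)
# illustrative values for the slope, offset and noise standard deviation
m_true, c_true, sigma = 2.0, 1.0, 0.1
x = np.array([1.0, 2.0, 3.0])
epsilon = np.random.normal(0, sigma, size=3)  # the slack variables
y = m_true*x + c_true + epsilon

# the residuals equal the slack variables when m and c are known exactly
residuals = y - (m_true*x + c_true)
print(np.allclose(residuals, epsilon))
```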

The follow up question is whether we can do the same thing with the parameters. If we have two parameters and only one observation, can we place a probability distribution over the parameters, as we did with the slack variables? The answer is yes.

In [ ]:

```
import pods
from ipywidgets import IntSlider
```

In [ ]:

```
pods.notebook.display_plots('under_determined_system{samp:0>3}.svg',
                            directory='../slides/diagrams/ml',
                            samp=IntSlider(0, 0, 10, 1))
```

Figure: *An underdetermined system can be fit by considering
uncertainty. Multiple solutions are consistent with one specified
point.*

From a philosophical perspective placing a probability distribution over
the *parameters* is known as the *Bayesian* approach. This is because
Thomas Bayes, in a 1763
essay
published at the Royal Society, introduced the Bernoulli
distribution with
a probabilistic interpretation for the *parameters*. Later statisticians
such as Ronald Fisher
objected to the use of probability distributions for *parameters*, and
so in an effort to discredit the approach they referred to it as
Bayesian. However, the earliest practitioners of modelling, such as
Laplace, applied the approach as the most natural thing to do for dealing
with unknowns (whether they were parameters or variables).
Unfortunately, this dispute led to a split in the modelling community
that still has echoes today. It is known as the Bayesian vs Frequentist
controversy. From my own perspective, I think that it is a false
dichotomy, and that the two approaches are actually complementary. My
own research focus is on *modelling* and in that context, the use
of probability is vital. For frequentist statisticians, such as Fisher,
the emphasis was on the value of the evidence in the data for a
particular hypothesis. This is known as hypothesis testing. The two
approaches can be unified because one of the most important approaches
to hypothesis testing is to compute the ratio of the
likelihoods, and
the result of applying a probability distribution to the parameters is
merely to arrive at a different form of the likelihood.

A segment from the lecture in 2012 on philosophical underpinnings.

In [ ]:

```
from IPython.lib.display import YouTubeVideo
YouTubeVideo('AvlnFnvFw_0')
```

Figure: *The philosophical underpinnings of uncertainty, as discussed
in 2012 MLAI lecture.*

\addreading{@Bishop:book06}{Section 1.2.3 (pg 21–24)} \addreading{@Rogers:book11}{Sections 3.1–3.4 (pg 95–117)} \addreading{@Bishop:book06}{Section 1.2.6 (start from just past eq 1.64, pg 30–32)}

\reading

In the overdetermined system we introduced a new set of slack variables, $\{\noiseScalar_i\}_{i=1}^\numData$, on top of our parameters $m$ and $c$. We dealt with the variables by placing a probability distribution over them. This gives rise to the likelihood, and for the case of Gaussian distributed variables, it gives rise to the sum of squares error. It was Gauss who first made this connection in his volume on "Theoria Motus Corporum Coelestium" (written in Latin).

In [ ]:

```
import pods
```

In [ ]:

```
pods.notebook.display_google_book(id='ORUOAAAAQAAJ', page='213')
```

The relevant section roughly translates as

... It is clear, that for the product $\Omega = h^\mu \pi^{-\frac{1}{2}\mu} e^{-hh(vv + v^\prime v^\prime + v^{\prime\prime} v^{\prime\prime} + \dots)}$ to be maximised the sum $vv + v^\prime v^\prime + v^{\prime\prime} v^{\prime\prime} + \text{etc}.$ ought to be minimized.

Therefore, the most probable values of the unknown quantities $p, q, r, s \text{ etc.}$ will be those for which the sum of the squares of the differences between the functions $V, V^\prime, V^{\prime\prime} \text{ etc.}$ and the observed values is minimized, provided that all observations are presumed to be of the same degree of precision.

It's on the strength of this paragraph that the density is known as the Gaussian, despite the fact that four pages later Gauss credits the necessary integral for the density to Laplace, and it was also Laplace that did a lot of the original work on dealing with these errors through probability. Stephen Stigler's book on the measurement of uncertainty before 1900 has a nice chapter on this.

In [ ]:

```
pods.notebook.display_google_book(id='ORUOAAAAQAAJ', page='217')
```

where the crediting to Laplace is about halfway through the last paragraph. This book was published in 1809, four years after \refnotes{Legendre presented least squares}{linear-regression} in an appendix to one of his chapters on the orbit of comets. Gauss goes on to make a claim for priority on the method on page 221 (towards the end of the first paragraph ...).

In [ ]:

```
pods.notebook.display_google_book(id='ORUOAAAAQAAJ', page='221')
```

Now we will study Bayesian approaches to regression. In the Bayesian
approach we define a *prior* density over our parameters, $m$ and $c$,
or more generally $\mappingVector$. This prior distribution gives us a
range of expected values for our parameters *before* we have seen the
data. The object in Bayesian inference is then to compute the *posterior*
density, which is the effect on the density of having observed the data.
In standard probability notation we write the prior distribution as, $$
p(\mappingVector),
$$ so it is the *marginal* distribution for the parameters, i.e. the
distribution we have for the parameters without any knowledge about the
data. The posterior distribution is written as, $$
p(\mappingVector|\dataVector, \inputMatrix).
$$ So the posterior distribution is the *conditional* distribution for
the parameters given the data (which in this case consists of pairs of
observations including response variables (or targets), $\dataScalar_i$,
and covariates (or inputs), $\inputVector_i$, where we are allowing
the inputs to be multivariate).

The posterior is recovered from the prior using *Bayes' rule*, which is
simply a rewriting of the product rule. We can recover Bayes' rule as
follows. The product rule of probability tells us that the joint
distribution is given as the product of the conditional and the
marginal. Dropping the inputs from our conditioning for the moment we
have, $$
p(\mappingVector, \dataVector)=p(\dataVector|\mappingVector)p(\mappingVector),
$$ where we see we have related the joint density to the prior density
and the *likelihood* from our previous investigation of regression, $$
p(\dataVector|\mappingVector) = \prod_{i=1}^\numData\gaussianDist{\dataScalar_i}{\mappingVector^\top \inputVector_i}{ \dataStd^2}
$$ which arises from the assumption that our observation is given by $$
\dataScalar_i = \mappingVector^\top \inputVector_i + \noiseScalar_i.
$$ In other words this is the Gaussian likelihood we have been fitting
by minimizing the sum of squares. Have a look at the session on
multivariate regression as a reminder.

We've introduced the likelihood, but we don't yet have a relationship
with the posterior. However, the product rule can also be written in the
following way $$
p(\mappingVector, \dataVector) = p(\mappingVector|\dataVector)p(\dataVector),
$$ where here we have simply used the opposite conditioning. We've
already introduced the *posterior* density above. This is the density
that represents our belief about the parameters *after* observing the
data. This is combined with the *marginal likelihood*, sometimes also
known as the evidence. It is the marginal likelihood because it is the
original likelihood of the data with the parameters marginalised,
$p(\dataVector)$. Here it's conditioned on nothing, but in practice you
should always remember that everything here is conditioned on things
like model choice: which set of basis functions. Because it's a
regression problem, it's also conditioned on the inputs. Using the
equality between the two different forms of the joint density we recover
$$
p(\mappingVector|\dataVector) = \frac{p(\dataVector|\mappingVector)p(\mappingVector)}{p(\dataVector)}
$$ where we divided both sides by $p(\dataVector)$ to recover this
result. Let's re-introduce the conditioning on the input locations (or
covariates), $\inputMatrix$ to write the full form of Bayes' rule for
the regression problem. $$
p(\mappingVector|\dataVector, \inputMatrix) = \frac{p(\dataVector|\mappingVector, \inputMatrix)p(\mappingVector)}{p(\dataVector|\inputMatrix)}
$$ where the posterior density for the parameters given the data is
$p(\mappingVector|\dataVector, \inputMatrix)$, the marginal likelihood
is $p(\dataVector|\inputMatrix)$, the prior density is
$p(\mappingVector)$ and our original regression likelihood is given by
$p(\dataVector|\mappingVector, \inputMatrix)$. It turns out that to
compute the posterior the only things we need to do are define the prior
and the likelihood. The other term on the right hand side can be
computed by *the sum rule*. It is one of the key equations of Bayesian
inference, the expectation of the likelihood under the prior, this
process is known as marginalisation, $$
p(\dataVector|\inputMatrix) = \int p(\dataVector|\mappingVector,\inputMatrix)p(\mappingVector) \text{d}\mappingVector
$$ I like the term marginalisation, and the description of the
probability as the *marginal likelihood*, because (for me) it somewhat
has the implication that the variable name has been removed, and
(perhaps) written in the margin. Marginalisation of a variable goes from
a likelihood where the variable is in place, to a new likelihood where
all possible values of that variable (under the prior) have been
considered and weighted in the integral. This implies that all we need
for specifying our model is to define the likelihood and the prior. We
already have our likelihood from our earlier discussion, so our focus
now turns to the prior density.
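To make the marginalisation concrete, here is a small numerical sketch for a single scalar parameter. Under a Gaussian prior $\mappingScalar \sim \gaussianSamp{0}{\alpha}$ and a Gaussian likelihood for $\dataScalar = \mappingScalar\basisScalar + \noiseScalar$, the Monte Carlo average of the likelihood over prior samples should agree with the analytic marginal, $\dataScalar \sim \gaussianSamp{0}{\basisScalar^2\alpha + \dataStd^2}$. All the numerical values here are illustrative.

```python
import numpy as np

np.random.seed(1)
# illustrative prior variance, noise variance, basis value and observation
alpha, sigma2, phi = 2.0, 0.1, 1.5
y_obs = 0.7

# Monte Carlo marginalisation: average the likelihood over prior samples
w_samples = np.random.normal(0, np.sqrt(alpha), size=100000)
lik = np.exp(-(y_obs - phi*w_samples)**2/(2*sigma2))/np.sqrt(2*np.pi*sigma2)
mc_estimate = lik.mean()

# analytic marginal likelihood: y ~ N(0, phi^2 alpha + sigma2)
var_y = phi**2*alpha + sigma2
analytic = np.exp(-y_obs**2/(2*var_y))/np.sqrt(2*np.pi*var_y)
print(mc_estimate, analytic)  # the two estimates should closely agree
```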

The tradition in Bayesian inference is to place a probability density over the parameters of interest in your model. This choice is made regardless of whether you generally believe those parameters to be stochastic or deterministic in origin. In other words, to a Bayesian, the modelling treatment does not differentiate between epistemic and aleatoric uncertainty. For linear regression we could consider the following Gaussian prior on the intercept parameter, $$c \sim \gaussianSamp{0}{\alpha_1}$$ where $\alpha_1$ is the variance of the prior distribution, its mean being zero.

The prior distribution is combined with the likelihood of the data given
the parameters $p(\dataScalar|c)$ to give the posterior via *Bayes'
rule*, $$
p(c|\dataScalar) = \frac{p(\dataScalar|c)p(c)}{p(\dataScalar)}
$$ where $p(\dataScalar)$ is the marginal probability of the data,
obtained through integration over the joint density,
$p(\dataScalar, c)=p(\dataScalar|c)p(c)$. Overall the equation can be
summarized as, $$
\text{posterior} = \frac{\text{likelihood}\times \text{prior}}{\text{marginal likelihood}}.
$$

In [ ]:

```
from ipywidgets import IntSlider
import pods
```

In [ ]:

```
pods.notebook.display_plots('dem_gaussian{stage:0>2}.svg',
                            directory='../slides/diagrams/ml',
                            stage=IntSlider(1, 1, 3, 1))
```

Figure: *Combining a Gaussian likelihood with a Gaussian prior to form
a Gaussian posterior*

Another way of seeing what's going on is to note that the numerator of Bayes' rule merely multiplies the likelihood by the prior. The denominator, is not a function of $c$. So the functional form is entirely determined by the multiplication of prior and likelihood. This has the effect of ensuring that the posterior only has probability mass in regions where both the prior and the likelihood have probability mass.

The marginal likelihood, $p(\dataScalar)$, operates to ensure that the distribution is normalised.

For the Gaussian case, the normalisation of the posterior can be
performed analytically. This is because both the prior and the
likelihood have the form of an *exponentiated quadratic*, $$
\exp(a^2)\exp(b^2) = \exp(a^2 + b^2),
$$ and the properties of the exponential mean that the product of two
exponentiated quadratics is also an exponentiated quadratic. That
implies that the posterior is also Gaussian, because a normalized
exponentiated quadratic is a Gaussian distribution.[^1]

We can complete the square of the quadratic form to obtain $$\log p(c | \dataVector, \inputVector, m, \dataStd^2) = -\frac{1}{2\tau^2}(c - \mu)^2 +\text{const},$$ where $\tau^2 = \left(\numData\dataStd^{-2} +\alpha_1^{-1}\right)^{-1}$ and $\mu = \frac{\tau^2}{\dataStd^2} \sum_{i=1}^\numData(\dataScalar_i-m\inputScalar_i)$.
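We can check these formulae numerically. The sketch below assumes illustrative values for the known slope $m$, the true offset, the noise variance and the prior variance; the posterior variance should be much smaller than the prior's, and the posterior mean should land near the true offset.

```python
import numpy as np

np.random.seed(2)
# illustrative values: known slope, true offset, noise and prior variances
m, c_true, sigma2, alpha1 = 1.0, 0.5, 0.05, 1.0
x = np.linspace(0, 1, 10)
y = m*x + c_true + np.random.normal(0, np.sqrt(sigma2), size=10)

# posterior variance and mean for the offset c, from completing the square
n = len(y)
tau2 = 1.0/(n/sigma2 + 1.0/alpha1)
mu = (tau2/sigma2)*np.sum(y - m*x)
print('posterior mean', mu, 'posterior variance', tau2)
```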

- Really want to know the *joint* posterior density over the parameters $c$ *and* $m$.
- Could now integrate out over $m$, but it's easier to consider the multivariate case.

Consider the distribution of height (in meters) of an adult male human population. We will approximate the marginal density of heights as a Gaussian density with mean given by $1.7\text{m}$ and a standard deviation of $0.15\text{m}$, implying a variance of $\dataStd^2=0.0225$, $$ h \sim \gaussianSamp{1.7}{0.0225}. $$ Similarly, we assume that weights of the population are distributed according to a Gaussian density with a mean of $75\text{kg}$ and a standard deviation of $6\text{kg}$ (implying a variance of $36$), $$ w \sim \gaussianSamp{75}{36}. $$

Figure: *Gaussian distributions for height and weight.*

First of all, we make an independence assumption, we assume that height and weight are independent. The definition of probabilistic independence is that the joint density, $p(w, h)$, factorizes into its marginal densities, $$ p(w, h) = p(w)p(h). $$ Given this assumption we can sample from the joint distribution by independently sampling weights and heights.
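A minimal numerical sketch of this independence assumption: if we sample heights and weights separately from the marginal densities above, the sample correlation between the two should be near zero.

```python
import numpy as np

np.random.seed(3)
num_samples = 10000
h = np.random.normal(1.7, 0.15, size=num_samples)  # heights in metres
w = np.random.normal(75, 6, size=num_samples)      # weights in kg

# under independence the sample correlation should be close to zero
corr = np.corrcoef(h, w)[0, 1]
print(corr)
```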

In [ ]:

```
import pods
from ipywidgets import IntSlider
```

In [ ]:

```
pods.notebook.display_plots('independent_height_weight{fig:0>3}.svg',
                            directory='../slides/diagrams/ml',
                            fig=IntSlider(0, 0, 7, 1))
```

Figure: *Samples from independent Gaussian variables that might
represent heights and weights.*

In reality height and weight are *not* independent. Taller people tend
on average to be heavier, and heavier people are likely to be taller.
This is reflected by the *body mass index*, a ratio suggested by one of
the fathers of statistics, Adolphe Quetelet. Quetelet was interested in
the notion of the *average man* and collected various statistics about
people. He defined the BMI to be, $$
\text{BMI} = \frac{w}{h^2}.
$$ To deal with this dependence we now introduce the notion of
*correlation* to the multivariate Gaussian density.
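As a quick check of the formula, Quetelet's "average man", using the mean height and mean weight from the marginal densities above, has a BMI of about 26.

```python
# BMI for the "average man": mean weight and height from the marginals above
w, h = 75.0, 1.7  # kg, metres
bmi = w/h**2
print(round(bmi, 1))  # about 26.0
```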

In [ ]:

```
import pods
from ipywidgets import IntSlider
```

In [ ]:

```
pods.notebook.display_plots('correlated_height_weight{fig:0>3}.svg',
                            directory='../slides/diagrams/ml',
                            fig=IntSlider(0, 0, 7, 1))
```

Figure: *Samples from *correlated* Gaussian variables that might
represent heights and weights.*

We can form the correlated density from the original independent one by rotating the data space using a rotation matrix $\rotationMatrix$.

$$ p(\dataVector) = \frac{1}{\det{2\pi\mathbf{D}}^{\frac{1}{2}}} \exp\left(-\frac{1}{2}(\dataVector - \meanVector)^\top\mathbf{D}^{-1}(\dataVector - \meanVector)\right) $$ $$ p(\dataVector) = \frac{1}{\det{2\pi\mathbf{D}}^{\frac{1}{2}}} \exp\left(-\frac{1}{2}(\rotationMatrix^\top\dataVector - \rotationMatrix^\top\meanVector)^\top\mathbf{D}^{-1}(\rotationMatrix^\top\dataVector - \rotationMatrix^\top\meanVector)\right) $$ $$ p(\dataVector) = \frac{1}{\det{2\pi\mathbf{D}}^{\frac{1}{2}}} \exp\left(-\frac{1}{2}(\dataVector - \meanVector)^\top\rotationMatrix\mathbf{D}^{-1}\rotationMatrix^\top(\dataVector - \meanVector)\right) $$ This gives an inverse covariance matrix: $$ \covarianceMatrix^{-1} = \rotationMatrix \mathbf{D}^{-1} \rotationMatrix^\top $$

$$ p(\dataVector) = \frac{1}{\det{2\pi\covarianceMatrix}^{\frac{1}{2}}} \exp\left(-\frac{1}{2}(\dataVector - \meanVector)^\top\covarianceMatrix^{-1} (\dataVector - \meanVector)\right) $$this gives a covariance matrix: $$ \covarianceMatrix = \rotationMatrix \mathbf{D} \rotationMatrix^\top $$
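A small sketch of this construction, assuming an illustrative rotation angle and diagonal variances: the resulting covariance $\covarianceMatrix = \rotationMatrix \mathbf{D} \rotationMatrix^\top$ is symmetric, and since rotations preserve volume it has the same determinant as $\mathbf{D}$.

```python
import numpy as np

# illustrative rotation angle and diagonal variances
theta = np.pi/4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
D = np.diag([4.0, 0.25])  # marginal variances in the rotated frame

# covariance formed by rotating the diagonal covariance: C = R D R^T
C = R @ D @ R.T

# C is symmetric, and the rotation preserves the determinant of D
print(np.allclose(C, C.T), np.isclose(np.linalg.det(C), np.linalg.det(D)))
```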

Let's assume that the prior density is given by a zero mean Gaussian, which is independent across each of the parameters, $$ \mappingVector \sim \gaussianSamp{\zerosVector}{\alpha \eye} $$ In other words, we are assuming, for the prior, that each element of the parameters vector, $\mappingScalar_i$, was drawn from a Gaussian density as follows $$ \mappingScalar_i \sim \gaussianSamp{0}{\alpha} $$ Let's start by assigning the parameter of the prior distribution, which is the variance of the prior distribution, $\alpha$.

In [ ]:

```
# set prior variance on w
alpha = 4.
# set the order of the polynomial basis set
order = 5
# set the noise variance
sigma2 = 0.01
```

\addreading{@Bishop:book06}{Multivariate Gaussians: Section 2.3 up to top of pg 85} \addreading{@Bishop:book06}{Section 3.3 up to 159 (pg 152–159)} \reading

A very important aspect of probabilistic modelling is to *sample* from
your model to see what type of assumptions you are making about your
data. In this case that involves a two stage process.

- Sample a candidate parameter vector from the prior.
- Place the candidate parameter vector in the likelihood and sample functions conditioned on that candidate vector.
- Repeat to try and characterise the type of functions you are generating.

Given a prior variance (as defined above) we can now sample from the
prior distribution and combine with a basis set to see what assumptions
we are making about the functions *a priori* (i.e. before we've seen the
data). Firstly we compute the basis function matrix. We will do it both
for our training data, and for a range of prediction locations
(`x_pred`).

In [ ]:

```
import numpy as np
import pods
```

In [ ]:

```
data = pods.datasets.olympic_marathon_men()
x = data['X']
y = data['Y']
num_data = x.shape[0]
num_pred_data = 100 # how many points to use for plotting predictions
x_pred = np.linspace(1890, 2016, num_pred_data)[:, None] # input locations for predictions
```

Now let's build the basis matrices. We define the polynomial basis as follows.

In [ ]:

```
def polynomial(x, num_basis=2, loc=0., scale=1.):
    degree = num_basis - 1
    degrees = np.arange(degree+1)
    return ((x-loc)/scale)**degrees
```

In [ ]:

```
import mlai
```

In [ ]:

```
loc=1950
scale=1
degree=4
basis = mlai.Basis(polynomial, number=degree+1, loc=loc, scale=scale)
Phi_pred = basis.Phi(x_pred)
Phi = basis.Phi(x)
```

Now we will sample from the prior to produce a vector $\mappingVector$
and use it to plot a function which is representative of our belief
*before* we fit the data. To do this we are going to use the properties
of the Gaussian density and a sample from a *standard normal* using the
function `np.random.normal`.

First, let's consider the case where we have one data point and one feature in our basis set. In other words $\mappingFunctionVector$ would be a scalar, $\mappingVector$ would be a scalar and $\basisMatrix$ would be a scalar. In this case we have $$ \mappingFunction = \basisScalar \mappingScalar $$ If $\mappingScalar$ is drawn from a normal density, $$ \mappingScalar \sim \gaussianSamp{\meanScalar_\mappingScalar}{c_\mappingScalar} $$ and $\basisScalar$ is a scalar value which we are given, then the properties of the Gaussian density tell us that $$ \basisScalar \mappingScalar \sim \gaussianSamp{\basisScalar\meanScalar_\mappingScalar}{\basisScalar^2c_\mappingScalar} $$ Let's test this out numerically. First we will draw 200 samples from a standard normal,

In [ ]:

```
w_vec = np.random.normal(size=200)
```

We can compute the mean of these samples and their variance

In [ ]:

```
print('w sample mean is ', w_vec.mean())
print('w sample variance is ', w_vec.var())
```

These are close to zero (the mean) and one (the variance) as you'd expect. Now compute the mean and variance of the scaled version,

In [ ]:

```
phi = 7
f_vec = phi*w_vec
print('True mean should be phi*0 = 0.')
print('True variance should be phi*phi*1 = ', phi*phi)
print('f sample mean is ', f_vec.mean())
print('f sample variance is ', f_vec.var())
```

If you increase the number of samples then you will see that the sample
mean and the sample variance begin to converge towards the true mean and
the true variance. Obviously adding an offset to a sample from
`np.random.normal` will change the mean. So if you want to sample from a
Gaussian with mean `mu` and standard deviation `sigma`, one way of doing
it is to sample from the standard normal and scale and shift the result.
So to sample a set of $\mappingScalar$ from a Gaussian with mean
$\meanScalar$ and variance $\alpha$,
$$\mappingScalar \sim \gaussianSamp{\meanScalar}{\alpha}$$ we can simply
scale and offset samples from the *standard normal*.

In [ ]:

```
mu = 4 # mean of the distribution
alpha = 2 # variance of the distribution
w_vec = np.random.normal(size=200)*np.sqrt(alpha) + mu
print('w sample mean is ', w_vec.mean())
print('w sample variance is ', w_vec.var())
```

Here the `np.sqrt` is necessary because we need to multiply by the
standard deviation and we specified the variance as `alpha`. So scaling
and offsetting a Gaussian distributed variable keeps the variable
Gaussian, but it affects the mean and variance of the resulting
variable.

To get an idea of the overall shape of the resulting distribution, let's do the same thing with a histogram of the results.

In [ ]:

```
%matplotlib inline
```

Now re-run this histogram with 100,000 samples and check that both histograms look qualitatively Gaussian.

Let's use this way of constructing samples from a Gaussian to check what
functions look like *a priori*. The process will be as follows. First,
we sample a $K$-dimensional random vector from `np.random.normal`. Then
we scale it by $\sqrt{\alpha}$ to obtain a prior sample of
$\mappingVector$.

In [ ]:

```
K = degree + 1
z_vec = np.random.normal(size=K)
w_sample = z_vec*np.sqrt(alpha)
print(w_sample)
```

Now we can combine our sample from the prior with the basis functions to create a function,

This shows the recurring problem with the polynomial basis (note the scale on the left hand side!). Our prior allows relatively large coefficients for the basis associated with high polynomial degrees. Because we are operating with input values of around 2000, this leads to output functions of very high values. The fix we have used for this before is to rescale our data before we apply the polynomial basis to it. Above, we set the scale of the basis to 1. Here let's set it to 100 and try again.

In [ ]:

```
scale = 100.
basis = mlai.Basis(polynomial, number=degree+1, loc=loc, scale=scale)
Phi_pred = basis.Phi(x_pred)
Phi = basis.Phi(x)
```

Now we need to recompute the basis functions from above,

Now let's loop through some samples and plot various functions as samples from this system,

The predictions for the mean output can now be computed. We want the
expected value of the predictions under the posterior distribution. In
matrix form, the predictions can be computed as $$
\mappingFunctionVector = \basisMatrix \mappingVector.
$$ This involves a matrix multiplication between a fixed matrix
$\basisMatrix$ and a vector that is drawn from a distribution
$\mappingVector$. Because $\mappingVector$ is drawn from a distribution,
this implies that $\mappingFunctionVector$ should also be drawn from a
distribution. There are two distributions we are interested in though.
We have just been sampling from the *prior* distribution to see what
sort of functions we get *before* looking at the data. In Bayesian
inference, we need to compute the *posterior* distribution and sample
from that density.

We will now attempt to compute the *posterior distribution*. In the
lecture we went through the maths that allows us to compute the
posterior distribution for $\mappingVector$. This distribution is also
Gaussian, $$
p(\mappingVector | \dataVector, \inputVector, \dataStd^2) = \gaussianDist{\mappingVector}{\meanVector_\mappingScalar}{\covarianceMatrix_\mappingScalar}
$$ with covariance, $\covarianceMatrix_\mappingScalar$, given by $$
\covarianceMatrix_\mappingScalar = \left(\dataStd^{-2}\basisMatrix^\top \basisMatrix + \alpha^{-1}\eye\right)^{-1}
$$ whilst the mean is given by $$
\meanVector_\mappingScalar = \covarianceMatrix_\mappingScalar \dataStd^{-2}\basisMatrix^\top \dataVector
$$ Let's compute the posterior covariance and mean, then we'll sample
from these densities to have a look at the posterior belief about
$\mappingVector$ once the data has been accounted for. Remember, the
process of Bayesian inference involves combining the prior,
$p(\mappingVector)$ with the likelihood,
$p(\dataVector|\inputVector, \mappingVector)$ to form the posterior,
$p(\mappingVector | \dataVector, \inputVector)$ through Bayes' rule, $$
p(\mappingVector|\dataVector, \inputVector) = \frac{p(\dataVector|\inputVector, \mappingVector)p(\mappingVector)}{p(\dataVector)}
$$ We've looked at the samples for our function
$\mappingFunctionVector = \basisMatrix\mappingVector$, which forms the
mean of the Gaussian likelihood, under the prior distribution. I.e.
we've sampled from $p(\mappingVector)$ and multiplied the result by the
basis matrix. Now we will sample from the posterior density,
$p(\mappingVector|\dataVector, \inputVector)$, and check that the new
samples do correspond to the data, i.e. we want to check that the
updated distribution includes information from the data set. First we
need to compute the posterior mean and *covariance*.

This video talks about Bayesian inference across the single parameter, the offset $c$, illustrating how the prior and the likelihood combine in one dimension to form a posterior.

In [ ]:

```
from IPython.lib.display import YouTubeVideo
YouTubeVideo('AvlnFnvFw_0')
```

Figure: *Univariate Bayesian inference. Lecture 10 from 2012 MLAI
Course.*

This section of the lecture talks about how we extend the idea of Bayesian inference for the multivariate case. It goes through the multivariate Gaussian and how to complete the square in the linear algebra as we managed below.

In [ ]:

```
from IPython.lib.display import YouTubeVideo
YouTubeVideo('Os1iqgpelPw')
```

Figure: *Multivariate Bayesian inference. Lecture 11 from 2012 MLAI
course.*

The lecture informs us that the posterior density for $\mappingVector$ is given by a Gaussian density with covariance $$ \covarianceMatrix_w = \left(\dataStd^{-2}\basisMatrix^\top \basisMatrix + \alpha^{-1}\eye\right)^{-1} $$ and mean $$ \meanVector_w = \covarianceMatrix_w\dataStd^{-2}\basisMatrix^\top \dataVector. $$

Compute the covariance for $\mappingVector$ given the training data,
calling the resulting variable `w_cov`. Compute the mean for
$\mappingVector$ given the training data, calling the resulting variable
`w_mean`. Assume that $\dataStd^2 = 0.01$.

*10 marks*

In [ ]:

```
# Write code for your answer to Question 1 in this box
# provide the answers so that the code runs correctly otherwise you will lose marks!
sigma2 =
w_cov =
w_mean =
```

In [ ]:

```
import numpy as np
import pods
```

In [ ]:

```
data = pods.datasets.olympic_marathon_men()
x = data['X']
y = data['Y']
offset = y.mean()
scale = np.sqrt(y.var())
```

In [ ]:

```
import matplotlib.pyplot as plt
import teaching_plots as plot
import mlai
```

In [ ]:

```
xlim = (1875, 2030)
ylim = (2.5, 6.5)
yhat = (y-offset)/scale
fig, ax = plt.subplots(figsize=plot.big_wide_figsize)
_ = ax.plot(x, y, 'r.', markersize=10)
ax.set_xlabel('year', fontsize=20)
ax.set_ylabel('pace min/km', fontsize=20)
ax.set_xlim(xlim)
ax.set_ylim(ylim)
mlai.write_figure(figure=fig,
                  filename='../slides/diagrams/datasets/olympic-marathon.svg',
                  transparent=True,
                  frameon=True)
```

Figure: *Olympic marathon pace times since 1892.*

Things to notice about the data include the outlier in 1904; in that year the Olympics was held in St Louis, USA. Organizational problems and challenges with dust kicked up by the cars following the race meant that participants got lost, and only very few participants completed the course.

More recent years see more consistently quick marathons.

Five fold cross validation tests the ability of the model to
*interpolate*.

In [ ]:

```
import mlai
import pods
```

In [ ]:

```
import pods
from ipywidgets import IntSlider
```

In [ ]:

```
pods.notebook.display_plots('olympic_BLM_polynomial_number{num_basis:0>3}.svg',
                            directory='../slides/diagrams/ml/',
                            num_basis=IntSlider(1, 1, 27, 1))
```

Figure: *Bayesian fit with 26th degree polynomial and negative
marginal log likelihood.*

For the polynomial fit, we will now look at *hold out* validation, where
we are holding out some of the most recent points. This tests the ability
of our model to *extrapolate*.

In [ ]:

```
import pods
from ipywidgets import IntSlider
```

In [ ]:

```
pods.notebook.display_plots('olympic_val_BLM_polynomial_number{num_basis:0>3}.svg',
                            directory='../slides/diagrams/ml',
                            num_basis=IntSlider(1, 1, 27, 1))
```

Figure: *Bayesian fit with 26th degree polynomial and hold out
validation scores.*

Five fold cross validation tests the ability of the model to
*interpolate*.

In [ ]:

```
import pods
from ipywidgets import IntSlider
```

In [ ]:

```
pods.notebook.display_plots('olympic_5cv{part:0>2}_BLM_polynomial_number{num_basis:0>3}.svg',
                            directory='../slides/diagrams/ml',
                            part=(0, 5),
                            num_basis=IntSlider(1, 1, 27, 1))
```

Figure: *Bayesian fit with 26th degree polynomial and five fold cross
validation scores.*

Before we were able to sample the prior values for the mean
*independently* from a Gaussian using `np.random.normal` and scaling the
result. However, observing the data *correlates* the parameters. Recall
this from the first lab where we had a correlation between the offset,
$c$, and the slope, $m$, which caused such problems with the coordinate
ascent algorithm. We need to sample from a *correlated* Gaussian. For
this we can use `np.random.multivariate_normal`.
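A minimal sketch of drawing such a correlated sample; here `w_mean` and `w_cov` are illustrative stand-ins for the posterior mean and covariance computed from the data, not the actual values.

```python
import numpy as np

np.random.seed(4)
# illustrative stand-ins for the posterior mean and covariance of w
w_mean = np.zeros(2)
w_cov = np.array([[1.0, 0.8],
                  [0.8, 1.0]])

# draw five correlated samples; each row is one sampled parameter vector
w_sample = np.random.multivariate_normal(w_mean, w_cov, size=5)
print(w_sample.shape)
```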

Now let's sample several functions and plot them all to see how the predictions fluctuate.

This gives us an idea of what our predictions are. These are the predictions that are consistent with data and our prior. Try plotting different numbers of predictions. You can also try plotting beyond the range of where the data is and see what the functions do there.

Rather than sampling from the posterior each time to compute our predictions, it might be better if we just summarised the predictions by the expected value of the output function, $\mappingFunction(x)$, for any particular input. If we can get formulae for this we don't need to sample the values of $\mappingFunction(x)$; we might be able to compute the distribution directly. Fortunately, in the Gaussian case, we can use properties of multivariate Gaussians to compute both the mean and the variance of these samples.

Gaussian variables have very particular properties that many other densities don't exhibit. Perhaps foremost amongst them is that the sum of any set of independent Gaussian distributed random variables also turns out to be Gaussian distributed. This property is much rarer than you might expect.

The sum of Gaussian random variables is also Gaussian, so if we have a
random variable $\dataScalar_i$ drawn from a Gaussian density with mean
$\meanScalar_i$ and variance $\dataStd^2_i$, $$
\dataScalar_i \sim \gaussianSamp{\meanScalar_i}{\dataStd^2_i},
$$ then the sum of $K$ independently sampled values of $\dataScalar_i$
will be drawn from a Gaussian with mean $\sum_{i=1}^K \meanScalar_i$ and
variance $\sum_{i=1}^K \dataStd_i^2$, $$
\sum_{i=1}^K \dataScalar_i \sim \gaussianSamp{\sum_{i=1}^K \meanScalar_i}{\sum_{i=1}^K \dataStd_i^2}.
$$ Let's try that experimentally. First let's generate a vector of
samples from a standard normal distribution,
$z \sim \gaussianSamp{0}{1}$, then we will scale and offset them, then
keep adding them into a vector `y_vec`.

In [ ]:

```
import numpy as np

K = 10 # how many Gaussians to add.
num_samples = 1000 # how many samples to have in y_vec
mus = np.linspace(0, 5, K) # mean values generated linearly spaced between 0 and 5
sigmas = np.linspace(0.5, 2, K) # sigmas generated linearly spaced between 0.5 and 2
y_vec = np.zeros(num_samples)
for mu, sigma in zip(mus, sigmas):
    z_vec = np.random.normal(size=num_samples) # z is from standard normal
    y_vec += z_vec*sigma + mu # add z*sigma + mu to y
# now y_vec is the sum of each scaled and offset z.
print('Sample mean is ', y_vec.mean(), ' and sample variance is ', y_vec.var())
print('True mean should be ', mus.sum())
print('True variance should be ', (sigmas**2).sum(), ' standard deviation ', np.sqrt((sigmas**2).sum()))
```

Of course, we can histogram `y_vec` as well.
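One way to make that comparison concrete (a sketch that regenerates the summed samples so it is self-contained, using a seeded generator) is to histogram the samples with `density=True` and compare the bin heights against the predicted Gaussian density:

```python
import numpy as np

rng = np.random.default_rng(0)

# Regenerate the summed samples (same construction as the cell above).
K = 10
num_samples = 1000
mus = np.linspace(0, 5, K)
sigmas = np.linspace(0.5, 2, K)
y_vec = np.zeros(num_samples)
for mu, sigma in zip(mus, sigmas):
    y_vec += rng.normal(size=num_samples)*sigma + mu

# Histogram the samples and compare against the predicted
# Gaussian density N(sum(mus), sum(sigmas**2)).
counts, edges = np.histogram(y_vec, bins=30, density=True)
centres = 0.5*(edges[:-1] + edges[1:])
mean, var = mus.sum(), (sigmas**2).sum()
density = np.exp(-(centres - mean)**2/(2*var))/np.sqrt(2*np.pi*var)
print('max absolute difference:', np.abs(counts - density).max())
```

With only 1000 samples the agreement is rough but clearly visible; increasing `num_samples` tightens it.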

We are interested in what our model is saying about the sort of
functions we are observing. The fact that summing of Gaussian variables
leads to new Gaussian variables, and scaling of Gaussian variables
*also* leads to Gaussian variables means that matrix multiplication
(which is just a series of sums and scales) also leads to Gaussian
densities. Matrix multiplication is just adding and scaling together, in
the formula, $\mappingFunctionVector = \basisMatrix \mappingVector$ we
can extract the first element from $\mappingFunctionVector$ as $$
\mappingFunction_i = \basisVector_i^\top \mappingVector
$$ where $\basisVector_i$ is a column vector formed from the $i$th row of
$\basisMatrix$ and $\mappingFunction_i$ is the $i$th element of
$\mappingFunctionVector$. This vector inner product itself merely implies
that $$
\mappingFunction_i = \sum_{j=1}^K \mappingScalar_j \basisScalar_{i, j}
$$ and if we now say that each $\mappingScalar_j$ is Gaussian distributed,
then because a scaled Gaussian is also Gaussian, and because a sum of
Gaussians is also Gaussian, we know that $\mappingFunction_i$ is also
Gaussian distributed. It merely remains to work out its mean and
covariance. We can do this by looking at the expectation under a
Gaussian distribution. The expectation of the mean vector is given by $$
\expDist{\mappingFunctionVector}{\gaussianDist{\mappingVector}{\meanVector}{\covarianceMatrix}} = \int \mappingFunctionVector
\gaussianDist{\mappingVector}{\meanVector}{\covarianceMatrix}
\text{d}\mappingVector = \int \basisMatrix\mappingVector
\gaussianDist{\mappingVector}{\meanVector}{\covarianceMatrix}
\text{d}\mappingVector = \basisMatrix \int \mappingVector
\gaussianDist{\mappingVector}{\meanVector}{\covarianceMatrix}
\text{d}\mappingVector = \basisMatrix \meanVector
$$

This is straightforward. The expectation of
$\mappingFunctionVector=\basisMatrix\mappingVector$ under the Gaussian
distribution for $\mappingVector$ is simply
$\basisMatrix\meanVector$, where $\meanVector$ is
the *mean* of the Gaussian density for $\mappingVector$. Because our
prior distribution was Gaussian with zero mean, the expectation under
the prior is given by $$
\expDist{\mappingFunctionVector}{\gaussianDist{\mappingVector}{\zerosVector}{\alpha\eye}} = \zerosVector
$$

The covariance is a little more complicated. A covariance matrix is defined as $$ \text{cov}\left(\mappingFunctionVector\right)_{\gaussianDist{\mappingVector}{\meanVector}{\covarianceMatrix}} = \expDist{\mappingFunctionVector\mappingFunctionVector^\top}{\gaussianDist{\mappingVector}{\meanVector}{\covarianceMatrix}} - \expDist{\mappingFunctionVector}{\gaussianDist{\mappingVector}{\meanVector}{\covarianceMatrix}}\expDist{\mappingFunctionVector}{\gaussianDist{\mappingVector}{\meanVector}{\covarianceMatrix}}^\top. $$ We've already computed $\expDist{\mappingFunctionVector}{\gaussianDist{\mappingVector}{\meanVector}{\covarianceMatrix}}=\basisMatrix \meanVector$ so we can substitute that in to recover $$ \text{cov}\left(\mappingFunctionVector\right)_{\gaussianDist{\mappingVector}{\meanVector}{\covarianceMatrix}} = \expDist{\mappingFunctionVector\mappingFunctionVector^\top}{\gaussianDist{\mappingVector}{\meanVector}{\covarianceMatrix}} - \basisMatrix \meanVector \meanVector^\top \basisMatrix^\top. $$

So we need the expectation of $\mappingFunctionVector\mappingFunctionVector^\top$. Substituting in $\mappingFunctionVector = \basisMatrix \mappingVector$ we have $$ \text{cov}\left(\mappingFunctionVector\right)_{\gaussianDist{\mappingVector}{\meanVector}{\covarianceMatrix}} = \expDist{\basisMatrix\mappingVector\mappingVector^\top \basisMatrix^\top}{\gaussianDist{\mappingVector}{\meanVector}{\covarianceMatrix}} - \basisMatrix \meanVector \meanVector^\top \basisMatrix^\top $$ $$ \text{cov}\left(\mappingFunctionVector\right)_{\gaussianDist{\mappingVector}{\meanVector}{\covarianceMatrix}} = \basisMatrix\expDist{\mappingVector\mappingVector^\top}{\gaussianDist{\mappingVector}{\meanVector}{\covarianceMatrix}} \basisMatrix^\top - \basisMatrix \meanVector \meanVector^\top\basisMatrix^\top $$ Which is dependent on the second moment of the Gaussian, $$ \expDist{\mappingVector\mappingVector^\top}{\gaussianDist{\mappingVector}{\meanVector}{\covarianceMatrix}} = \covarianceMatrix + \meanVector\meanVector^\top $$ that can be substituted in to recover, $$ \text{cov}\left(\mappingFunctionVector\right)_{\gaussianDist{\mappingVector}{\meanVector}{\covarianceMatrix}} = \basisMatrix\covarianceMatrix \basisMatrix^\top $$ so in the case of the prior distribution, where we have $\covarianceMatrix = \alpha \eye$ we can write $$ \text{cov}\left(\mappingFunctionVector\right)_{\gaussianDist{\mappingVector}{\zerosVector}{\alpha \eye}} = \alpha \basisMatrix \basisMatrix^\top $$
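These two moment results are easy to check numerically. The following sketch uses an arbitrary synthetic basis matrix and prior (not the lab's Olympic data) and compares the empirical mean and covariance of $\basisMatrix\mappingVector$ against $\basisMatrix\meanVector$ and $\basisMatrix\covarianceMatrix\basisMatrix^\top$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary small basis matrix and Gaussian prior over the weights.
Phi = rng.standard_normal((5, 3))     # 5 'data points', 3 basis functions
mu = np.array([0.5, -1.0, 2.0])       # prior mean of w
C = 0.1*np.eye(3) + 0.05              # prior covariance of w (SPD)

# Sample many weight vectors and map them through the basis.
w_samples = rng.multivariate_normal(mu, C, size=100000)  # (100000, 3)
f_samples = w_samples @ Phi.T                            # (100000, 5)

# Empirical moments should approach the analytic ones.
print('mean error:', np.abs(f_samples.mean(axis=0) - Phi @ mu).max())
print('cov error: ', np.abs(np.cov(f_samples.T) - Phi @ C @ Phi.T).max())
```

Both errors shrink roughly as the inverse square root of the number of samples, as expected for Monte Carlo estimates.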

This implies that the prior we have suggested for $\mappingVector$,
which is Gaussian with a mean of zero and covariance of $\alpha \eye$,
suggests that the distribution for $\mappingFunctionVector$ is also
Gaussian with a mean of zero and covariance of
$\alpha \basisMatrix\basisMatrix^\top$. Since our observed output,
$\dataVector$, is given by a noise corrupted variation of
$\mappingFunctionVector$, the final distribution for $\dataVector$ is
given as $$
\dataVector = \mappingFunctionVector + \noiseVector
$$ where the noise, $\noiseVector$, is sampled from a Gaussian density:
$\noiseVector \sim \gaussianSamp{\zerosVector}{\dataStd^2\eye}$. So, in
other words, we are taking a Gaussian distributed random value
$\mappingFunctionVector$, $$
\mappingFunctionVector \sim \gaussianSamp{\zerosVector}{\alpha\basisMatrix\basisMatrix^\top}
$$ and adding to it another Gaussian distributed value,
$\noiseVector \sim \gaussianSamp{\zerosVector}{\dataStd^2\eye}$, to form
our data observations, $\dataVector$. Once again the sum of two
(multivariate) Gaussian distributed variables is also Gaussian, with a
mean given by the sum of the means (both zero in this case) and the
covariance given by the sum of the covariances. So we now have that the
marginal likelihood for the data, $p(\dataVector)$ is given by $$
p(\dataVector) = \gaussianDist{\dataVector}{\zerosVector}{\alpha \basisMatrix \basisMatrix^\top + \dataStd^2\eye}
$$ This is our *implicit* assumption for $\dataVector$ given our prior
assumption for $\mappingVector$.
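We can check this implicit marginal numerically too. The sketch below uses synthetic stand-ins for `Phi`, `alpha` and `sigma2`, samples $\dataVector = \basisMatrix\mappingVector + \noiseVector$ many times, and compares the empirical covariance with $\alpha \basisMatrix\basisMatrix^\top + \dataStd^2\eye$:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic basis and hyperparameters (stand-ins for the lab's values).
Phi = rng.standard_normal((4, 2))
alpha, sigma2 = 2.0, 0.5
K = alpha*(Phi @ Phi.T) + sigma2*np.eye(4)

# Sample y = Phi w + noise many times; empirical cov should approach K.
n_samples = 200000
w = rng.standard_normal((n_samples, 2))*np.sqrt(alpha)       # w ~ N(0, alpha I)
noise = rng.standard_normal((n_samples, 4))*np.sqrt(sigma2)  # eps ~ N(0, sigma2 I)
y = w @ Phi.T + noise
print('cov error:', np.abs(np.cov(y.T) - K).max())
```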

The marginal likelihood can also be computed, it has the form: $$ p(\dataVector|\inputMatrix, \dataStd^2, \alpha) = \frac{1}{(2\pi)^\frac{n}{2}\left|\kernelMatrix\right|^\frac{1}{2}} \exp\left(-\frac{1}{2} \dataVector^\top \kernelMatrix^{-1} \dataVector\right) $$ where $\kernelMatrix = \alpha \basisMatrix\basisMatrix^\top + \dataStd^2 \eye$.

So it is a zero mean $\numData$-dimensional Gaussian with covariance matrix $\kernelMatrix$.
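In practice the log of this density is evaluated, and a Cholesky decomposition of $\kernelMatrix$ keeps the computation stable. A sketch with synthetic `Phi`, `y`, `alpha` and `sigma2` (stand-ins for the lab's quantities), cross-checked against a direct evaluation using `np.linalg.slogdet`:

```python
import numpy as np

def log_marginal_likelihood(y, Phi, alpha, sigma2):
    """log N(y; 0, alpha*Phi@Phi.T + sigma2*I) via a Cholesky factor."""
    n = y.shape[0]
    K = alpha*(Phi @ Phi.T) + sigma2*np.eye(n)
    L = np.linalg.cholesky(K)
    u = np.linalg.solve(L, y)              # so that u @ u = y^T K^{-1} y
    log_det = 2*np.log(np.diag(L)).sum()   # log |K| from the Cholesky diagonal
    return -0.5*(n*np.log(2*np.pi) + log_det + u @ u)

rng = np.random.default_rng(2)
Phi = rng.standard_normal((8, 3))
y = rng.standard_normal(8)
ll = log_marginal_likelihood(y, Phi, alpha=1.0, sigma2=0.1)

# Cross-check against a direct (less stable) evaluation.
K = Phi @ Phi.T + 0.1*np.eye(8)
direct = -0.5*(8*np.log(2*np.pi) + np.linalg.slogdet(K)[1]
               + y @ np.linalg.solve(K, y))
print(ll, direct)
```

Working with the Cholesky factor avoids forming $\kernelMatrix^{-1}$ explicitly and gives the log determinant essentially for free.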

These ideas together now allow us to compute the mean and error bars of the predictions. The mean prediction, before corrupting by noise, is given by $$ \mappingFunctionVector = \basisMatrix\mappingVector $$ in matrix form. This gives you enough information to compute the predictive mean.

Compute the predictive mean for the function at all the values of the
basis function given by `Phi_pred`. Call the vector of predictions
`f_pred_mean`. Plot the predictions alongside the data.
We can also compute what the training error was. Use the output from
your model to compute the predictive mean, and then compute the sum of
squares error of that predictive mean. $$
E = \sum_{i=1}^\numData (\dataScalar_i - \langle \mappingFunction_i\rangle)^2
$$ where $\langle \mappingFunction_i\rangle$ is the expected output of
the model at point $\inputScalar_i$.

*15 marks*

In [ ]:

```
# Write code for your answer to Question 2 in this box
# provide the answers so that the code runs correctly otherwise you will lose marks!
# compute mean under posterior density
f_pred_mean =
# plot the predictions
# compute mean at the training data and sum of squares error
f_mean =
sum_squares =
print('The error is: ', sum_squares)
```

Finally, we can compute error bars for the predictions. The error bars are the standard deviations of the predictions for $\mappingFunctionVector=\basisMatrix\mappingVector$ under the posterior density for $\mappingVector$. The standard deviations of these predictions can be found from the variance of the prediction at each point. Those variances are the diagonal entries of the covariance matrix. We've already computed the form of the covariance under Gaussian expectations, $$ \text{cov}\left(\mappingFunctionVector\right)_{\gaussianDist{\mappingVector}{\meanVector}{\covarianceMatrix}} = \basisMatrix\covarianceMatrix \basisMatrix^\top $$ which under the posterior density is given by $$ \text{cov}\left(\mappingFunctionVector\right)_{\gaussianDist{\mappingVector}{\meanVector_w}{\covarianceMatrix_w}} = \basisMatrix\covarianceMatrix_w \basisMatrix^\top $$

The error bars are given by computing the standard deviation of the predictions, $\mappingFunction$. For a given prediction $\mappingFunction_i$ the variance is $\text{var}(\mappingFunction_i) = \langle \mappingFunction_i^2\rangle - \langle \mappingFunction_i \rangle^2$. This is given by the diagonal element of the covariance of $\mappingFunctionVector$, $$ \text{var}(\mappingFunction_i) = \basisVector_{i, :}^\top \covarianceMatrix_w \basisVector_{i, :} $$ where $\basisVector_{i, :}$ is the basis vector associated with the input location, $\inputVector_i$.
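As a generic check of this diagonal identity (with synthetic matrices, not the lab's quantities): the per-point variances can be extracted without forming the full covariance, since each diagonal entry of $\basisMatrix\covarianceMatrix_w\basisMatrix^\top$ is the quadratic form $\basisVector_{i,:}^\top \covarianceMatrix_w \basisVector_{i,:}$.

```python
import numpy as np

rng = np.random.default_rng(3)
Phi = rng.standard_normal((6, 4))      # synthetic basis matrix
A = rng.standard_normal((4, 4))
C_w = A @ A.T + np.eye(4)              # synthetic SPD 'posterior' covariance

# Full covariance of f = Phi w, then its diagonal...
full_diag = np.diag(Phi @ C_w @ Phi.T)
# ...versus the row-by-row quadratic forms phi_i^T C_w phi_i.
row_diag = np.sum((Phi @ C_w) * Phi, axis=1)
print(np.allclose(full_diag, row_diag))
```

The row-by-row form avoids computing the off-diagonal entries, which matters when the number of prediction points is large.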

Plot the mean function and the error bars for your basis.

*20 marks*

In [ ]:

```
# Write code for your answer to Question 3 in this box
# provide the answers so that the code runs correctly otherwise you will lose marks!
# Compute variance at function values
f_pred_var =
f_pred_std =
# plot the mean and error bars at 2 standard deviations above and below the mean
```

Now we will test the generalisation ability of these models. Firstly we are going to use hold out validation to attempt to see which model is best for extrapolating.

Now split the data into training and *hold out* validation sets. Hold
out the data for years after 1980. Compute the predictions for different
model orders between 0 and 8. Find the model order which fits best
according to *hold out* validation. Is it the same as the maximum
likelihood result from last week?

*25 marks*

In [ ]:

```
# Write code for your answer to Question 4 in this box
# provide the answers so that the code runs correctly otherwise you will lose marks!
```

Now we will use leave one out cross validation to attempt to see which model is best at interpolating. Do you get the same result as for hold out validation? Compare plots of the hold out validation error for different degrees and the cross validation error for different degrees. Why are they so different? Select a suitable polynomial for characterising the differences in the predictions. Plot the mean function and the error bars for the full data set (to represent the leave one out solution) and the training data from the hold out experiment. Discuss your answer.

*30 marks*

In [ ]:

```
# Write code for your answer to Question 5 in this box
# provide the answers so that the code runs correctly otherwise you will lose marks!
```

\addreading{@Rogers:book11}{Section 3.7–3.8 (pg 122–133)} \addreading{@Bishop:book06}{Section 3.4 (pg 161–165)} \reading

- Gold medal times for Olympic Marathon since 1896.
- Marathons before 1924 didn't have a standardised distance.
- Present results using pace per km.
- In 1904 Marathon was badly organised leading to very slow times.
Image from Wikimedia Commons