The previous two chapters hid the inner mechanics of PyMC, and more generally Markov Chain Monte Carlo (MCMC), from the reader. The reason for including this chapter is three-fold. The first is that any book on Bayesian inference must discuss MCMC. I cannot fight this. Blame the statisticians. Secondly, knowing the process of MCMC gives you insight into whether your algorithm has converged. (Converged to what? We will get to that.) Thirdly, we'll understand *why* we are returned thousands of samples from the posterior as a solution, which at first thought can seem odd.

When we set up a Bayesian inference problem with $N$ unknowns, we are implicitly creating an $N$ dimensional space for the prior distributions to exist in. Associated with the space is an additional dimension, which we can describe as the *surface*, or *curve*, that sits on top of the space, that reflects the *prior probability* of a particular point. The surface on the space is defined by our prior distributions. For example, if we have two unknowns $p_1$ and $p_2$, and priors for both are $\text{Uniform}(0,5)$, the space created is a square of side length 5 and the surface is a flat plane that sits on top of the square (representing that every point is equally likely).

In [2]:

```
%pylab inline
import scipy.stats as stats
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

figsize(12.5, 4)

fig = plt.figure()
x = y = np.linspace(0, 5, 100)
X, Y = np.meshgrid(x, y)

subplot(121)
uni_x = stats.uniform.pdf(x, loc=0, scale=5)
uni_y = stats.uniform.pdf(y, loc=0, scale=5)
M = np.dot(uni_x[:, None], uni_y[None, :])  # outer product: the joint prior surface
im = plt.imshow(M, interpolation='none', origin='lower',
                cmap=cm.jet, vmax=1, vmin=-.15, extent=(0, 5, 0, 5))
plt.xlim(0, 5)
plt.ylim(0, 5)
plt.title("Landscape formed by Uniform priors.")

ax = fig.add_subplot(122, projection='3d')
ax.plot_surface(X, Y, M, cmap=cm.jet, vmax=1, vmin=-.15)
ax.view_init(azim=390)
plt.title("Uniform prior landscape; alternate view");
```

Alternatively, if the two priors are $\text{Exp}(3)$ and $\text{Exp}(10)$, then the space is all positive numbers on the 2-D plane, and the surface induced by the priors looks like a waterfall that starts at the point (0,0) and flows over the positive numbers.

The plots below visualize this. The darker red the color, the more prior probability is assigned to that location. Conversely, areas of darker blue represent that our priors assign very low probability to that location.

In [22]:

```
figsize(12.5, 5)
fig = plt.figure()

subplot(121)
# NB: scipy's exponential uses `scale` = 1/lambda, so these two
# priors have expected values 3 and 10 respectively.
exp_x = stats.expon.pdf(x, scale=3)
exp_y = stats.expon.pdf(y, scale=10)
M = np.dot(exp_x[:, None], exp_y[None, :])
CS = plt.contour(X, Y, M)
im = plt.imshow(M, interpolation='none', origin='lower',
                cmap=cm.jet, extent=(0, 5, 0, 5))
#plt.xlabel("prior on $p_1$")
#plt.ylabel("prior on $p_2$")
plt.title("$Exp(3), Exp(10)$ prior landscape")

ax = fig.add_subplot(122, projection='3d')
ax.plot_surface(X, Y, M, cmap=cm.jet)
ax.view_init(azim=390)
plt.title("$Exp(3), Exp(10)$ prior landscape; \nalternate view")
```

These are simple examples in 2D space, where our brains can understand surfaces well. In practice, spaces and surfaces generated by our priors can be much higher dimensional.

If these surfaces describe our *prior distributions* on the unknowns, what happens to our space after we incorporate our observed data $X$? The data $X$ does not change the space, but it changes the surface of the space by *pulling and stretching the fabric of the prior surface* to reflect where the true parameters likely live. More data means more pulling and stretching, and our original shape becomes mangled or insignificant compared to the newly formed shape. Less data, and our original shape is more present. Regardless, the resulting surface describes the *posterior distribution*.
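This pulling and stretching is just Bayes' Theorem at work: the posterior surface is the prior surface multiplied, point by point, by the likelihood of the data, then rescaled to have total probability one:

$$ p(\theta \;|\; X) \propto p(X \;|\; \theta) \, p(\theta) $$

The code below forms exactly this product on a grid: the array `M` holds the prior surface, `L` the likelihood surface, and `M*L` the (unnormalized) posterior surface.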

Again I must stress that it is, unfortunately, impossible to visualize this in large dimensions. For two dimensions, the data essentially *pushes up* the original surface to make *tall mountains*. The tendency of the observed data to *push up* the posterior probability in certain areas is checked by the prior probability distribution, so that less prior probability means more resistance. Thus in the two-exponential-prior case above, a mountain (or multiple mountains) that might erupt near the (0,0) corner would be much higher than mountains that erupt closer to (5,5), since there is more resistance (low prior probability) near (5,5). The peak reflects the posterior probability of where the true parameters are likely to be found. Importantly, if the prior has assigned a probability of 0 somewhere, then no posterior probability will be assigned there.

Suppose the priors mentioned above represent different parameters $\lambda$ of two Poisson distributions. We observe a few data points and visualize the new landscape:

In [20]:

```
### create the observed data

# sample size of data we observe
N = 1

# the true parameters, but of course we do not see these values...
lambda_1_true = 1
lambda_2_true = 3

#...we see the data generated, dependent on the above two values.
data = np.concatenate([
    stats.poisson.rvs(lambda_1_true, size=(N, 1)),
    stats.poisson.rvs(lambda_2_true, size=(N, 1))
], axis=1)
print "observed (2-dimensional, sample size = %d):" % N, data

# plotting details.
x = y = np.linspace(.01, 5, 100)
X, Y = np.meshgrid(x, y)  # rebuild the grid to match the new x, y

# likelihood of the observed data at each candidate parameter value
likelihood_x = np.array([stats.poisson.pmf(data[:, 0], _x)
                         for _x in x]).prod(axis=1)
likelihood_y = np.array([stats.poisson.pmf(data[:, 1], _y)
                         for _y in y]).prod(axis=1)
L = np.dot(likelihood_x[:, None], likelihood_y[None, :])
```

In [21]:

```
figsize(12.5, 12)

# uniform priors
subplot(221)
uni_x = stats.uniform.pdf(x, loc=0, scale=5)
uni_y = stats.uniform.pdf(y, loc=0, scale=5)
M = np.dot(uni_x[:, None], uni_y[None, :])
im = plt.imshow(M, interpolation='none', origin='lower',
                cmap=cm.jet, vmax=1, vmin=-.15, extent=(0, 5, 0, 5))
plt.scatter(lambda_1_true, lambda_2_true, c="k", s=50)
plt.xlim(0, 5)
plt.ylim(0, 5)
plt.title("Landscape formed by Uniform priors on $\lambda_1, \lambda_2$.")

subplot(223)
plt.contour(X, Y, M * L)  # M*L: prior surface times likelihood surface
im = plt.imshow(M * L, interpolation='none', origin='lower',
                cmap=cm.jet, extent=(0, 5, 0, 5))
plt.title("Landscape warped by data observations;\n Uniform priors on $\lambda_1, \lambda_2$.")
plt.scatter(lambda_1_true, lambda_2_true, c="k", s=50)
plt.xlim(0, 5)
plt.ylim(0, 5)

# exponential priors
subplot(222)
exp_x = stats.expon.pdf(x, loc=0, scale=3)
exp_y = stats.expon.pdf(y, loc=0, scale=10)
M = np.dot(exp_x[:, None], exp_y[None, :])
plt.contour(X, Y, M)
im = plt.imshow(M, interpolation='none', origin='lower',
                cmap=cm.jet, extent=(0, 5, 0, 5))
plt.scatter(lambda_1_true, lambda_2_true, c="k", s=50)
plt.xlim(0, 5)
plt.ylim(0, 5)
plt.title("Landscape formed by Exponential priors on $\lambda_1, \lambda_2$.")

subplot(224)
plt.contour(X, Y, M * L)
im = plt.imshow(M * L, interpolation='none', origin='lower',
                cmap=cm.jet, extent=(0, 5, 0, 5))
plt.scatter(lambda_1_true, lambda_2_true, c="k", s=50)
plt.title("Landscape warped by data observations;\n Exponential priors on $\lambda_1, \lambda_2$.")
plt.xlim(0, 5)
plt.ylim(0, 5);
```

The bottom-left plot is the deformed landscape with the $\text{Uniform}(0,5)$ priors, and the bottom-right plot is the deformed landscape with the exponential priors. Notice that the posterior landscapes look different from one another, though the data observed is identical in both cases. The reason is as follows. The exponential-prior landscape, bottom right, puts very little posterior weight on values in the upper-right corner: this is because *the prior does not put much weight there*, whereas the uniform-prior landscape is happy to put posterior weight there. Also, the highest point, corresponding to the darkest red, is biased towards (0,0) in the exponential case, which is the result of the exponential prior putting more prior weight in the (0,0) corner.

The black dot represents the true parameters. Even with 1 sample point, the mountains attempt to contain the true parameters. Of course, inference with a sample size of 1 is incredibly naive, and choosing such a small sample size was only illustrative. Try observing how the mountain changes as you vary the sample size.

We should explore the deformed posterior space generated by our prior surface and observed data to find the posterior mountain. However, we cannot naively search the space: any computer scientist will tell you that traversing $N$-dimensional space is exponentially difficult in $N$: the size of the space quickly blows up as we increase $N$ (see the curse of dimensionality). What hope do we have to find these hidden mountains? The idea behind MCMC is to perform an intelligent search of the space. To say "search" implies we are looking for a particular object, which is perhaps not an accurate description of what MCMC is doing. Recall: MCMC returns *samples* from the posterior distribution, not the distribution itself. Stretching our mountainous analogy to its limit, MCMC performs a task similar to repeatedly asking "How likely is this pebble I found to be from the mountain I am searching for?", and completes its task by returning thousands of accepted pebbles in hopes of reconstructing the original mountain. In MCMC and PyMC lingo, the returned sequence of "pebbles" are the samples, more often called the *traces*.

When I say MCMC intelligently searches, I mean MCMC will *hopefully* converge towards the areas of high posterior probability. MCMC does this by exploring nearby positions and moving into areas with higher probability. Again, perhaps "converge" is not an accurate term to describe MCMC's progression. Converging usually implies moving towards a point in space, but MCMC moves towards a *broader area* in the space and randomly walks in that area, picking up samples from that area.

At first, returning thousands of samples to the user might sound like an inefficient way to describe the posterior distributions. I would argue that this is extremely efficient. Consider the alternative possibilities:

- Returning a mathematical formula for the "mountain ranges" would involve describing an $N$-dimensional surface with arbitrary peaks and valleys.
- Returning the "peak" of the landscape, while mathematically possible and a sensible thing to do as the highest point corresponds to the most probable estimate of the unknowns, ignores the shape of the landscape, which we have previously argued is very important in determining posterior confidence in unknowns.

Besides computational reasons, likely the strongest reason for returning samples is that we can easily use *The Law of Large Numbers* to solve otherwise intractable problems. I postpone this discussion for the next chapter. With the thousands of samples, we can reconstruct the posterior surface by organizing them in a histogram.
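As a tiny preview of that idea: once we have posterior samples, expectations and probabilities become simple averages. A minimal sketch, using a stand-in array of "posterior samples" rather than a real trace:

```
import numpy as np

# stand-in for posterior samples of some unknown parameter theta
np.random.seed(42)
samples = np.random.normal(2.0, 1.0, 50000)

print "posterior mean estimate:", samples.mean()          # approximates E[theta | data]
print "P(theta > 3 | data) estimate:", (samples > 3).mean()
```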

There is a large family of algorithms that perform MCMC. Most of these algorithms can be expressed at a high level as follows (mathematical details can be found in the appendix):

```
1. Start at current position.
2. Propose moving to a new position (investigate a pebble near you).
3. Accept/Reject the new position based on the position's adherence to the data
   and prior distributions (ask if the pebble likely came from the mountain).
4. - If you accept: Move to the new position. Return to Step 1.
   - Else: Do not move to new position. Return to Step 1.
5. After a large number of iterations, return the positions.
```
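To make the pseudocode concrete, here is a minimal sketch of a Metropolis-style sampler in one dimension. The `mountain` function is a made-up unnormalized posterior, purely for illustration; real implementations, including PyMC's, are considerably more sophisticated:

```
import numpy as np

def mountain(position):
    # a hypothetical unnormalized posterior: a single "mountain" centered at 2
    return np.exp(-0.5 * (position - 2) ** 2)

def metropolis(n_steps, start=0.0, step_size=1.0):
    position = start
    trace = np.empty(n_steps)
    for i in range(n_steps):
        proposal = position + step_size * np.random.randn()  # step 2: propose nearby
        # step 3: accept with probability min(1, ratio of posterior heights)
        if np.random.rand() < mountain(proposal) / mountain(position):
            position = proposal                              # step 4: accept the move
        trace[i] = position                                  # step 5: record position
    return trace

trace = metropolis(5000)
print "mean of accepted pebbles:", trace.mean()
```

A histogram of `trace` approximates the mountain's shape, which is exactly the sense in which MCMC "returns" the posterior.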

This way we move in the general direction towards the regions where the posterior distributions exist, and collect samples sparingly on the journey. Once we reach the posterior distribution, we can easily collect samples as they likely all belong to the posterior distribution.

If the current position of the MCMC algorithm is in an area of extremely low probability, which is often the case when the algorithm begins (typically at a random location in the space), the algorithm will move to positions *that are likely not from the posterior* but are better than everything else nearby. Thus the first moves of the algorithm are not reflective of the posterior.

In the above algorithm's pseudocode, notice that only the current position matters (new positions are investigated only near the current position). We can describe this property as *memorylessness*, i.e. the algorithm does not care *how* it arrived at its current position, only that it is there.

Besides MCMC, there are other procedures available for determining the posterior distributions. A Laplace approximation is an approximation of the posterior using simple functions. A more advanced method is Variational Bayes.

Suppose we are given the following dataset:

In [112]:

```
figsize( 12.5, 4)
data = np.loadtxt( "data/mixture_data.csv", delimiter="," )
hist( data, bins = 20, color = "k" )
plt.title("Histogram of the dataset" )
print data[ :10 ], "..."
```

What does the data suggest? It appears the data has a bimodal form, that is, it appears to have two peaks, one near 120 and the other near 200. Perhaps there are *two clusters* within this dataset.

This dataset is a good example of the data-generation modeling technique from the last chapter. We can propose *how* the data might have been created. I suggest the following data-generation algorithm:

- For each data point, choose cluster 1 with probability $p$, else choose cluster 2.
- Draw a random variate from a Normal distribution with parameters $\mu_i$ and $\sigma_i$ where $i$ was chosen in step 1.
- Repeat.

This algorithm would create an effect similar to the observed dataset, so we choose this as our model. Of course, we do not know $p$ or the parameters of the Normal distributions. Hence we must infer, or *learn*, these unknowns.
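As a sanity check on this generative story, we can simulate it directly. A short sketch with made-up parameter values (`p_true` and the means and standard deviations below are purely illustrative):

```
import numpy as np
import scipy.stats as stats

p_true = 0.4                   # hypothetical probability of the first cluster
mus = np.array([120., 200.])   # hypothetical cluster centers
sds = np.array([10., 25.])     # hypothetical cluster standard deviations

# step 1: choose a cluster for each point (0 with probability p_true)
cluster = stats.bernoulli.rvs(1 - p_true, size=300)
# step 2: draw from the Normal whose parameters belong to the chosen cluster
simulated_data = stats.norm.rvs(loc=mus[cluster], scale=sds[cluster])
```

A histogram of `simulated_data` should show the same bimodal shape as the observed dataset.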

Denote the Normal distributions $\text{Nor}_0$ and $\text{Nor}_1$ (starting the index at 0 is just Pythonic). Both currently have unknown mean and standard deviation, denoted $\mu_i$ and $\sigma_i, \; i =0,1$ respectively. A specific data point can be from either $\text{Nor}_0$ or $\text{Nor}_1$, and we assume that the data point is assigned to $\text{Nor}_0$ with probability $p$.

An appropriate way to assign data points to clusters is to use a PyMC `Categorical` stochastic variable. Its parameter is a $k$-length array of probabilities that must sum to one, and its `value` attribute is an integer between 0 and $k-1$, randomly chosen according to the crafted array of probabilities (in our case $k=2$). *A priori*, we do not know the probability of assignment to cluster 0, so we create a uniform variable over $[0,1]$ to model this. Call this `p`. Thus the probability array we enter into the `Categorical` variable is `[p, 1-p]`.

In [93]:

```
import pymc as mc
p = mc.Uniform( "p", 0, 1)
assignment = mc.Categorical("assignment", [p, 1-p], size = data.shape[0] )
print "prior assignment, with p = %.2f:"%p.value
print assignment.value[ :10 ], "..."
```

Looking at the above dataset, I would guess that the standard deviations of the two Normals are different. To maintain ignorance of what the standard deviations might be, we will initially model them as uniform on 0 to 100. Really we are talking about $\tau$, the *precision* of the Normal distribution, but it is easier to think in terms of standard deviation. Our PyMC code will need to transform our standard deviation into precision by the relation:

$$ \tau = \frac{1}{\sigma^2} $$

In PyMC, we can do this in one step by writing:

```
taus = 1.0/mc.Uniform( "stds", 0, 100, size= 2)**2
```

Notice that we specified `size=2`: we are modeling both $\tau$s as a single PyMC variable. Note that this does not induce a necessary relationship between the two $\tau$s; it is simply for succinctness.
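For comparison, a sketch of the more verbose equivalent with two separate variables (the names `std_0`, `std_1` are hypothetical):

```
# equivalent, but more verbose: one PyMC variable per standard deviation
std_0 = mc.Uniform("std_0", 0, 100)
std_1 = mc.Uniform("std_1", 0, 100)
tau_0 = 1.0 / std_0 ** 2   # arithmetic on stochastics yields Deterministic nodes
tau_1 = 1.0 / std_1 ** 2
```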

We also need to specify priors on the centers of the clusters. The centers are really the $\mu$ parameters in these Normal distributions. Their priors can be modeled by a Normal distribution. Looking at the data, I have an idea where the two centers might be — I would guess somewhere around 120 and 190 respectively, though I am not very confident in these eyeballed estimates. Hence I will set $\mu_0 = 120, \mu_1 = 190$ and $\sigma_0 = \sigma_1 = 10$ (recall we enter the $\tau$ parameter, so we enter $1/\sigma^2 = 0.01$ into the PyMC variable).

In [94]:

```
taus = 1.0 / mc.Uniform("stds", 0, 100, size=2) ** 2
centers = mc.Normal("centers", [120, 190], [0.01, 0.01], size=2)

"""
The below deterministic functions map an assignment, in this case 0 or 1,
to a set of parameters, located in the length-2 arrays `taus` and `centers`.
"""

@mc.deterministic
def center_i(assignment=assignment, centers=centers):
    return centers[assignment]

@mc.deterministic
def tau_i(assignment=assignment, taus=taus):
    return taus[assignment]

print "Random assignments: ", assignment.value[:4], "..."
print "Assigned center: ", center_i.value[:4], "..."
print "Assigned precision: ", tau_i.value[:4], "..."
```

In [95]:

```
# and to combine it with the observations:
observations = mc.Normal("obs", center_i, tau_i, value=data, observed=True)

# below we create a model class; the observed variable must be included
# so that the data actually enters the model
model = mc.Model([p, assignment, observations, taus, centers])
```

PyMC has an MCMC class, `MCMC`, in the main namespace of PyMC, that implements the MCMC exploring algorithm. We initialize it by passing in a `Model` instance:

```
mcmc = mc.MCMC( model )
```

The method for asking the `MCMC` to explore the space is `sample( iterations )`, where `iterations` is the number of steps you wish the algorithm to perform. We try 50000 steps below:

In [96]:

```
mcmc = mc.MCMC( model )
mcmc.sample( 50000 )
```

Below I plot the paths, or "traces", the unknown parameters (centers, precisions, and $p$) have taken thus far. The traces can be retrieved using the `trace` method of the `MCMC` object, which accepts the assigned PyMC variable's `name`. For example, `mcmc.trace("centers")` will retrieve a `Trace` object that can be indexed (using `[:]` or `.gettrace()` to retrieve all samples, or fancy-indexing like `[1000:]`).

In [99]:

```
figsize(12.5, 9)

subplot(311)
lw = 1
center_trace = mcmc.trace("centers")[:]

# for pretty colors later in the book.
colors = ["#348ABD", "#A60628"] if center_trace[-1, 0] > center_trace[-1, 1] \
    else ["#A60628", "#348ABD"]

plot(center_trace[:, 0], label="trace of center 0", c=colors[0], lw=lw)
plot(center_trace[:, 1], label="trace of center 1", c=colors[1], lw=lw)
plt.title("Traces of unknown parameters")
leg = plt.legend(loc="upper right")
leg.get_frame().set_alpha(0.7)

subplot(312)
std_trace = mcmc.trace("stds")[:]
plot(std_trace[:, 0], label="trace of standard deviation of cluster 0",
     c=colors[0], lw=lw)
plot(std_trace[:, 1], label="trace of standard deviation of cluster 1",
     c=colors[1], lw=lw)
plt.legend(loc="upper left")

subplot(313)
p_trace = mcmc.trace("p")[:]
plot(p_trace, label="$p$: frequency of assignment to cluster 0",
     color="#467821", lw=1)
plt.xlabel("Steps")
plt.ylim(0, 1)
plt.legend();
```

Notice the following characteristics:

- The traces converge, not to a single point, but to a *distribution* of possible points. This is *convergence* in an MCMC algorithm.
- Inference using the first few thousand points is a bad idea, as they are unrelated to the final distribution we are interested in. Thus it is a good idea to discard those samples before using the samples for inference (see the short example after this list). We call this period before convergence the *burn-in period*.
- The traces appear as a random "walk" around the space; that is, the paths exhibit correlation with previous positions. This is both good and bad. We will always have correlation between current positions and the previous positions, but too much of it means we are not exploring the space well. This will be detailed in the Diagnostics section later in this chapter.
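For instance, a quick way to drop a burn-in period by hand is ordinary slicing of the retrieved trace (the cutoff of 25000 here is an arbitrary choice for illustration):

```
# discard the first 25000 of our 50000 samples as burn-in
center_trace_burned = mcmc.trace("centers")[25000:]
```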

To achieve further convergence, we will perform more MCMC steps. Starting the MCMC again after it has already been called does not mean starting the entire algorithm over. In the pseudocode algorithm of MCMC above, the only position that matters is the current position (new positions are investigated near the current position), which is implicitly stored in the PyMC variables' `value` attribute. Thus it is fine to halt an MCMC algorithm and inspect its progress, with the intention of starting it up again later. The `value` attributes are not overwritten.

We will sample the MCMC one hundred thousand more times and visualize the progress below:

In [100]:

```
mcmc.sample( 100000 )
```

In [101]:

```
figsize(12.5, 4)
center_trace = mcmc.trace("centers", chain=1)[:]
prev_center_trace = mcmc.trace("centers", chain=0)[:]

x = np.arange(50000)
plot(x, prev_center_trace[:, 0], label="previous trace of center 0",
     lw=lw, alpha=0.4, c=colors[1])
plot(x, prev_center_trace[:, 1], label="previous trace of center 1",
     lw=lw, alpha=0.4, c=colors[0])

x = np.arange(50000, 150000)
plot(x, center_trace[:, 0], label="new trace of center 0", lw=lw, c="#348ABD")
plot(x, center_trace[:, 1], label="new trace of center 1", lw=lw, c="#A60628")

plt.title("Traces of unknown center parameters")
leg = plt.legend(loc="upper right")
leg.get_frame().set_alpha(0.8)
plt.xlabel("Steps");
```

The `trace` method of the `MCMC` instance has a keyword argument `chain`, which indexes the call to `sample` whose samples you would like returned. (Often we need to call `sample` multiple times, and the ability to retrieve past samples is useful.) The default for `chain` is -1, which returns the samples from the latest call to `sample`.
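For example, to work with all 150000 samples from the two calls above, a simple sketch is to retrieve each chain and stitch them together manually:

```
# retrieve the samples call-by-call and concatenate them
first_call = mcmc.trace("centers", chain=0)[:]    # 50000 samples
second_call = mcmc.trace("centers", chain=1)[:]   # 100000 samples
full_center_trace = np.concatenate([first_call, second_call])
```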

We have not forgotten our main challenge: identify the clusters. We have determined posterior distributions for our unknowns. We plot the posterior distributions of the center and standard deviation variables below:

In [102]:

```
figsize(11.0, 4)
std_trace = mcmc.trace("stds")[:]

# subplot positions: centers in the left column, stds in the right
_i = [1, 2, 3, 4]
for i in range(2):
    subplot(2, 2, _i[2 * i])
    plt.title("Posterior of center of cluster %d" % i)
    plt.hist(center_trace[:, i], color=colors[i], bins=30,
             histtype="stepfilled")

    subplot(2, 2, _i[2 * i + 1])
    plt.title("Posterior of standard deviation of cluster %d" % i)
    plt.hist(std_trace[:, i], color=colors[i], bins=30,
             histtype="stepfilled")
    #plt.autoscale(tight=True)

plt.tight_layout()
```

The MCMC algorithm has proposed that the most likely centers of the two clusters are near 120 and 200 respectively. Similar inference can be applied to the standard deviation.

We are also given the posterior distributions for the labels of the data points, available in `mcmc.trace("assignment")`. Below is a visualization of this. The y-axis represents a subsample of the posterior labels for each data point. The x-axis are the sorted values of the data points. A red square is an assignment to cluster 1, and a blue square is an assignment to cluster 0.

In [103]:

```
import matplotlib as mpl

figsize(12.5, 4.5)
cmap = mpl.colors.ListedColormap(colors)
imshow(mcmc.trace("assignment")[::400, np.argsort(data)],
       cmap=cmap, aspect=.4, alpha=.9)
xticks(np.arange(0, data.shape[0], 40),
       ["%.2f" % s for s in np.sort(data)[::40]])
plt.ylabel("posterior sample")
plt.xlabel("value of $i$th data point")
plt.title("Posterior labels of data points");
```

Looking at the above plot, it appears that the most uncertainty is between 150 and 170. The above plot slightly misrepresents things, as the x-axis is not a true scale (it displays the value of the $i$th sorted data point). A clearer diagram is below, where we have estimated the *frequency* of each data point belonging to labels 0 and 1.

In [104]:

```
cmap = mpl.colors.LinearSegmentedColormap.from_list("BMH", colors)
assign_trace = mcmc.trace("assignment")[:]
scatter(data, 1 - assign_trace.mean(axis=0), cmap=cmap,
        c=assign_trace.mean(axis=0), s=50)
plt.ylim(-0.05, 1.05)
plt.xlim(35, 300)
plt.title("Probability of data point belonging to cluster 0")
plt.ylabel("probability")
plt.xlabel("value of data point");
```

Even though we modeled the clusters using Normal distributions, we didn't get just a single Normal distribution that *best* fits the data (whatever our definition of best is), but a distribution of values for the Normal's parameters. How can we choose just a single pair of values for the mean and variance and determine a *sorta-best-fit* Gaussian?

One quick and dirty way (which has nice theoretical properties we will see in Chapter 5), is to use the *mean* of the posterior distributions. Below we overlay the Normal density functions, using the mean of the posterior distributions as the chosen parameters, with our observed data:

In [106]:

```
norm = stats.norm
x = np.linspace(20, 300, 500)
posterior_center_means = center_trace.mean(axis=0)
posterior_std_means = std_trace.mean(axis=0)
posterior_p_mean = mcmc.trace("p")[:].mean()

hist(data, bins=20, histtype="step", normed=True, color="k",
     lw=2, label="histogram of data")

y = posterior_p_mean * norm.pdf(x, loc=posterior_center_means[0],
                                scale=posterior_std_means[0])
plt.plot(x, y, label="Cluster 0 (using posterior-mean parameters)", lw=3)
plt.fill_between(x, y, color=colors[1], alpha=0.3)

y = (1 - posterior_p_mean) * norm.pdf(x, loc=posterior_center_means[1],
                                      scale=posterior_std_means[1])
plt.plot(x, y, label="Cluster 1 (using posterior-mean parameters)", lw=3)
plt.fill_between(x, y, color=colors[0], alpha=0.3)

plt.legend(loc="upper left")
plt.title("Visualizing Clusters using posterior-mean parameters");
```

In the above example, a possible (though less likely) scenario is that cluster 0 has a very large standard deviation, and cluster 1 has a small standard deviation. This would still satisfy the evidence, albeit less so than our original inference. Alternatively, it would be incredibly unlikely for *both* distributions to have a small standard deviation, as the data does not support this hypothesis at all. Thus the two standard deviations are *dependent* on each other: if one is small, the other must be large. In fact, *all* the unknowns are related in a similar manner. For example, if a standard deviation is large, the mean has a wider possible space of realizations. Conversely, a small standard deviation restricts the mean to a small area.

During MCMC, we are returned vectors representing samples from the unknown posteriors. Elements of different vectors cannot be used together, as this would break the above logic: perhaps a sample has returned that cluster 1 has a small standard deviation, hence all the other variables in that sample would incorporate that and be adjusted accordingly. It is easy to avoid this problem, though: just make sure you are indexing traces correctly.
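Concretely, using the traces already retrieved above, a coherent joint posterior sample takes the *same* row index from every trace:

```
# one coherent joint posterior sample: use a single shared index
i = 1000
mu_0, mu_1 = center_trace[i]
sd_0, sd_1 = std_trace[i]
p_i = p_trace[i]
# pairing center_trace[i] with std_trace[j] for j != i would
# destroy the dependence structure between the unknowns
```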

Another small example to illustrate the point. Suppose two variables, $x$ and $y$, are related by $x+y=10$. We model $x$ as a Normal random variable with mean 4 and explore 500 samples.

In [2]:

```
import pymc as mc

x = mc.Normal("x", 4, 10)
# y is a deterministic function of x: y = 10 - x
y = mc.Lambda("y", lambda x=x: 10 - x, trace=True)

ex_mcmc = mc.MCMC(mc.Model([x, y]))
ex_mcmc.sample(500)

plt.plot(ex_mcmc.trace("x")[:])
plt.plot(ex_mcmc.trace("y")[:])
plt.title("Displaying (extreme) case of dependence between unknowns");
```

As you can see, the two variables are not unrelated, and it would be wrong to add the $i$th sample of $x$ to the $j$th sample of $y$, unless $i = j$.

The above clustering can be generalized to $k$ clusters. Choosing $k=2$ allowed us to visualize the MCMC better, and examine some very interesting plots.

What about prediction? Suppose we observe a new data point, say $x = 175$, and we wish to label it to a cluster. It is foolish to simply assign it to the *closer* cluster center, as this ignores the standard deviation of the clusters, and we have seen from the plots above that this consideration is very important. More formally: we are interested in the *probability* (as we cannot be certain about labels) of assigning $x=175$ to cluster 1. Denote the assignment of $x$ as $L_x$, which is equal to 0 or 1, and we are interested in $P(L_x = 1 \;|\; x = 175 )$.

A naive method to compute this is to re-run the above MCMC with the additional data point appended. The disadvantage with this method is that it will be slow to infer for each novel data point. Alternatively, we can try a *less precise*, but much quicker method.

We will use Bayes' Theorem for this. If you recall, Bayes' Theorem looks like:

$$ P( A | X ) = \frac{ P( X | A )P(A) }{P(X) }$$

In our case, $A$ represents $L_x = 1$ and $X$ is the evidence we have: we observe that $x = 175$. For a particular sample set of parameters for our posterior distribution, $( \mu_0, \sigma_0, \mu_1, \sigma_1, p)$, we are interested in asking "Is the probability that $x$ is in cluster 1 *greater* than the probability it is in cluster 0?", where the probability is dependent on the chosen parameters.

\begin{align} & P(L_x = 1 \;|\; x = 175 ) \gt P(L_x = 0 \;|\; x = 175 ) \\\\[5pt] & \frac{ P( x=175 \;|\; L_x = 1 )P( L_x = 1 ) }{P(x = 175) } \gt \frac{ P( x=175 \;|\; L_x = 0 )P( L_x = 0 )}{P(x = 175) } \end{align}

As the denominators are equal, they can be ignored (and good riddance, because computing the quantity $P(x = 175)$ can be difficult).

$$ P( x=175 | L_x = 1 )P( L_x = 1 ) \gt P( x=175 | L_x = 0 )P( L_x = 0 ) $$

In [385]:

```
norm_pdf = stats.norm.pdf
p_trace = mcmc.trace("p")[:]
x = 175

# is P(x | L_x = 1) P(L_x = 1) greater than P(x | L_x = 0) P(L_x = 0)?
# recall p is the prior probability of cluster 0, so 1-p belongs to cluster 1
v = (1 - p_trace) * norm_pdf(x, loc=center_trace[:, 1], scale=std_trace[:, 1]) > \
    p_trace * norm_pdf(x, loc=center_trace[:, 0], scale=std_trace[:, 0])

print "Probability of belonging to cluster 1:", v.mean()
```

Giving us a probability instead of a label is a very useful thing. Instead of the naive

```
L = 1 if prob > 0.5 else 0
```

we can optimize our guesses using a *loss function*, to which the entire fifth chapter is devoted.

### Using `MAP` to improve convergence

If you ran the above example yourself, you may have noticed that our results were not consistent: perhaps your cluster division was more scattered, or perhaps less scattered. The problem is that our traces are a function of the *starting values* of the MCMC algorithm.

It can be shown mathematically that if we let the MCMC run long enough, by performing many steps, the algorithm *should forget its initial position*. In fact, this is what it means to say the MCMC converged (in practice though we can never achieve total convergence). Hence if we observe different posterior analyses, it is likely because our MCMC has not fully converged yet, and we should not use its samples yet (we should use a larger burn-in period).

In fact, poor starting values can prevent any convergence, or significantly slow it down. Ideally, we would like to have the chain start at the *peak* of our landscape, as this is exactly where the posterior distributions exist. Hence, if we started at the "peak", we could avoid a lengthy burn-in period and incorrect inference. Generally, we call this "peak" the *maximum a posteriori* or, more simply, the *MAP*.

Of course, we do not know where the MAP is. PyMC provides an object that will approximate, if not find, the MAP location. In the PyMC main namespace is the `MAP` object that accepts a PyMC `Model` instance. Calling `.fit()` from the `MAP` instance sets the variables in the model to their MAP values:

```
map_ = mc.MAP(model)
map_.fit()
```

The `MAP.fit()` method has the flexibility of allowing the user to choose which optimization algorithm to use (after all, this is an optimization problem: we are looking for the values that maximize our landscape), as not all optimization algorithms are created equal. The default optimization algorithm in the call to `fit` is scipy's `fmin` algorithm (which attempts to minimize the *negative of the landscape*). An alternative algorithm that is available is Powell's Method, a favourite of PyMC blogger Abraham Flaxman [1], used by calling `fit(method='fmin_powell')`. From my experience, I use the default, but if my convergence is slow or not guaranteed, I experiment with Powell's method.

The MAP can also be used as a solution to the inference problem, as mathematically it is the *most likely* value for the unknowns. But as mentioned earlier in this chapter, this location ignores the uncertainty and doesn't return a distribution.

Typically it is a good idea, and rarely a bad one, to precede your call to `mcmc.sample` with a call to `MAP(model).fit()`. The intermediate call to `fit` is hardly computationally intensive, and will save you time later due to a shorter burn-in period.

It is still a good idea to provide a burn-in period, even if we are using `MAP` prior to calling `MCMC.sample`, just to be safe. We can have PyMC automatically discard the first $n$ samples by specifying the `burn` parameter in the call to `sample`. As one does not know when the chain has fully converged, I like to assign the first *half* of my samples to be discarded, sometimes up to 90% of my samples for longer runs. To continue the clustering example from above, my new code would look something like:

```
model = mc.Model([p, assignment, observations, taus, centers])

map_ = mc.MAP(model)
map_.fit()  # stores the fitted values in the variables' `value` attribute

mcmc = mc.MCMC(model)
mcmc.sample(100000, 50000)  # 100000 steps, discarding the first 50000 as burn-in
```

Autocorrelation is a measure of how related a series of numbers is with itself. A measurement of 1.0 is perfect positive autocorrelation, 0 no autocorrelation, and -1 is perfect negative autocorrelation. If you are familiar with standard *correlation*, then autocorrelation is just how correlated the series at time $t$, $x_t$, is with the series at time $t-k$:

$$R(k) = Corr( x_t, x_{t-k} ) $$

For example, consider the two series:

$$x_t \sim \text{Normal}(0,1), \;\; x_0 = 0$$ $$y_t \sim \text{Normal}(y_{t-1}, 1 ), \;\; y_0 = 0$$

which have example paths like:

In [40]:

```
figsize(12.5, 4)
import pymc as mc

x_t = mc.rnormal(0, 1, 200)
x_t[0] = 0

y_t = np.zeros(200)
for i in range(1, 200):
    y_t[i] = mc.rnormal(y_t[i - 1], 1)

plt.plot(y_t, label="$y_t$", lw=3)
plt.plot(x_t, label="$x_t$", lw=3)
plt.xlabel("time, $t$")
plt.legend();
```
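As a rough, hand-rolled illustration of the definition above (the chapter returns to proper diagnostics later), the lag-$k$ autocorrelation of a series can be estimated with `np.corrcoef`, reusing the `x_t` and `y_t` series from the cell above:

```
def autocorr(series, k):
    # sample estimate of Corr(x_t, x_{t-k}): correlate the series with
    # a copy of itself shifted by k steps
    return np.corrcoef(series[k:], series[:-k])[0, 1]

print "x_t lag-1 autocorrelation:", autocorr(x_t, 1)  # near 0: independent draws
print "y_t lag-1 autocorrelation:", autocorr(y_t, 1)  # near 1: a random walk
```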