import numpy as np
from scipy import linalg
import pylab as pl
from sklearn import mixture
%pylab inline --no-import-all
Populating the interactive namespace from numpy and matplotlib
Both models have access to five components with which to fit the data. Note that the EM model will necessarily use all five components, while the DP model will effectively use only as many as are needed for a good fit. This is a property of the Dirichlet Process prior. Below we can see that the EM model splits some components arbitrarily, because it is trying to fit too many of them, while the Dirichlet Process model adapts its number of states automatically (see the weight inspection after the DP fit below).
This example doesn’t show it, as we’re in a low-dimensional space, but another advantage of the Dirichlet process model is that it can fit full covariance matrices effectively even when there are fewer examples per cluster than there are dimensions in the data, thanks to the regularization properties of the variational inference algorithm. A rough sketch of that setting follows.
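As an illustration only (this cell is not part of the original run; it assumes the same older scikit-learn release where `mixture.DPGMM` exists, and reuses the imports above), the sketch below fits a full-covariance DP mixture to 10-dimensional data with just eight samples per cluster:

# hypothetical small-sample, higher-dimensional setting:
# 8 examples per cluster in 10 dimensions
n_dim, n_per_cluster = 10, 8
rng = np.random.RandomState(1)
X_small = np.r_[rng.randn(n_per_cluster, n_dim),
                rng.randn(n_per_cluster, n_dim) + 5.]
dp_small = mixture.DPGMM(n_components=5, covariance_type='full')
dp_small.fit(X_small)
print(np.unique(dp_small.predict(X_small)))  # labels of the components in use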
## generate artificial data
n_samples = 500
np.random.seed(0)
centers = np.array([[0., -0.1], [1.7, .4]])
# first cluster: random normal deviates mixed through the `centers` matrix
# (anisotropic); second cluster: isotropic blob centered at (-6, 3)
X = np.r_[np.dot(np.random.randn(n_samples, 2), centers),
          .7 * np.random.randn(n_samples, 2) + np.array([-6, 3])]
pl.plot(X[:, 0], X[:, 1], '.')
[<matplotlib.lines.Line2D at 0x43d7550>]
## fit a GMM with EM using 5 components
gmm = mixture.GMM(n_components=5, covariance_type='full')
%time gmm.fit(X)
y = gmm.predict(X)
for label in np.unique(y):
    # plot each EM-assigned cluster in its own color
    pl.plot(X[y == label, 0], X[y == label, 1], '.')
CPU times: user 176 ms, sys: 196 ms, total: 372 ms
Wall time: 186 ms
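A quick way to confirm that EM spends all five components is to inspect the fitted mixture weights. A minimal sketch, assuming the `weights_` attribute of this older `mixture.GMM` API:

print(gmm.weights_)                 # all five weights sit well away from zero
print((gmm.weights_ > 0.01).sum())  # number of non-negligible components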
## fit a Dirichlet process mixture of Gaussians using 5 components
dpgmm = mixture.DPGMM(n_components=5, covariance_type='full')
%time dpgmm.fit(X)
y = dpgmm.predict(X)
for label in np.unique(y):
    # plot each DP-assigned cluster in its own color
    pl.plot(X[y == label, 0], X[y == label, 1], '.')
CPU times: user 160 ms, sys: 144 ms, total: 304 ms
Wall time: 165 ms
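By contrast, the DP mixture should concentrate its weight on only as many components as the data supports. The same kind of check, again assuming the `weights_` attribute on the fitted `mixture.DPGMM`:

print(dpgmm.weights_)                 # most weights collapse towards zero
print((dpgmm.weights_ > 0.01).sum())  # effectively active components
print(len(np.unique(y)))              # components actually used by predict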