A minimal reproducible example of Robust Regression with Outlier Detection using the Hogg 2010 Signal vs Noise method.
Note: if runs become unstable or Theano throws errors, try clearing the cache with
$> theano-cache clear
and rerunning the notebook.

Package Requirements (shown as a conda-env YAML):
$> less conda_env_pymc3_examples.yml
name: pymc3_examples
channels:
- defaults
dependencies:
- python=3.4
- ipython
- ipython-notebook
- ipython-qtconsole
- numpy
- scipy
- matplotlib
- pandas
- seaborn
- patsy
- pip
$> conda env create --file conda_env_pymc3_examples.yml
$> source activate pymc3_examples
$> pip install --process-dependency-links git+https://github.com/pymc-devs/pymc3
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import optimize
import pymc3 as pm
import theano
import theano.tensor as tt
# configure some basic options
sns.set(style="darkgrid", palette="muted")
pd.set_option('display.notebook_repr_html', True)
plt.rcParams['figure.figsize'] = 12, 8
np.random.seed(0)
print('Running on PyMC3 v{}'.format(pm.__version__))
Running on PyMC3 v3.6
We'll use the Hogg 2010 data available at https://github.com/astroML/astroML/blob/master/astroML/datasets/hogg2010test.py
It's a very small dataset, so for convenience it's hardcoded below.
#### cut & pasted directly from the fetch_hogg2010test() function
## identical to the original dataset as hardcoded in the Hogg 2010 paper
dfhogg = pd.DataFrame(np.array([[1, 201, 592, 61, 9, -0.84],
[2, 244, 401, 25, 4, 0.31],
[3, 47, 583, 38, 11, 0.64],
[4, 287, 402, 15, 7, -0.27],
[5, 203, 495, 21, 5, -0.33],
[6, 58, 173, 15, 9, 0.67],
[7, 210, 479, 27, 4, -0.02],
[8, 202, 504, 14, 4, -0.05],
[9, 198, 510, 30, 11, -0.84],
[10, 158, 416, 16, 7, -0.69],
[11, 165, 393, 14, 5, 0.30],
[12, 201, 442, 25, 5, -0.46],
[13, 157, 317, 52, 5, -0.03],
[14, 131, 311, 16, 6, 0.50],
[15, 166, 400, 34, 6, 0.73],
[16, 160, 337, 31, 5, -0.52],
[17, 186, 423, 42, 9, 0.90],
[18, 125, 334, 26, 8, 0.40],
[19, 218, 533, 16, 6, -0.78],
[20, 146, 344, 22, 5, -0.56]]),
columns=['id','x','y','sigma_y','sigma_x','rho_xy'])
## for convenience zero-base the 'id' and use as index
dfhogg['id'] = dfhogg['id'] - 1
dfhogg.set_index('id', inplace=True)
## standardize (mean center and divide by 1 sd)
dfhoggs = (dfhogg[['x','y']] - dfhogg[['x','y']].mean(0)) / dfhogg[['x','y']].std(0)
dfhoggs['sigma_y'] = dfhogg['sigma_y'] / dfhogg['y'].std(0)
dfhoggs['sigma_x'] = dfhogg['sigma_x'] / dfhogg['x'].std(0)
## create xlims ylims for plotting
xlims = (dfhoggs['x'].min() - np.ptp(dfhoggs['x'])/5
,dfhoggs['x'].max() + np.ptp(dfhoggs['x'])/5)
ylims = (dfhoggs['y'].min() - np.ptp(dfhoggs['y'])/5
,dfhoggs['y'].max() + np.ptp(dfhoggs['y'])/5)
## scatterplot the standardized data
g = sns.FacetGrid(dfhoggs, height=8)
_ = g.map(plt.errorbar, 'x', 'y', 'sigma_y', 'sigma_x', marker="o", ls='')
_ = g.axes[0][0].set_ylim(ylims)
_ = g.axes[0][0].set_xlim(xlims)
plt.subplots_adjust(top=0.92)
_ = g.fig.suptitle('Scatterplot of Hogg 2010 dataset after standardization', fontsize=16)
Observe: even judging by eye, the datapoints fall roughly along a straight line with positive slope, but a handful look like possible outliers from such a line.
The linear model is really simple and conventional:
$$\bf{y} = \beta^{T} \bf{X} + \bf{\sigma}$$

where:
$\beta$ = coefs = $\{1, \beta_{j \in X_{j}}\}$
$\sigma$ = the measured error in $y$ in the dataset sigma_y
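As a quick frequentist sanity check (my addition, not part of the original notebook), the same line can be fit by ordinary least squares on the standardized data, ignoring the measurement errors; the Bayesian model below should land close to this point estimate.
# Hypothetical sanity check: plain least-squares point estimate on the
# standardized data, ignoring the measurement errors sigma_y.
# np.polyfit returns coefficients highest-degree first: [slope, intercept] for deg=1.
slope_pt, intercept_pt = np.polyfit(dfhoggs['x'], dfhoggs['y'], deg=1)
print('OLS point estimate: intercept = {:.3f}, slope = {:.3f}'.format(intercept_pt, slope_pt))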
NOTE: the priors on the intercept and slope are weakly informative Normals (giving a ridge-like regression), and the likelihood uses the measured error sigma_y from the dataset.
with pm.Model() as mdl_ols:
    ## Define weakly informative Normal priors to give Ridge regression
    b0 = pm.Normal('b0_intercept', mu=0, sigma=1)
    b1 = pm.Normal('b1_slope', mu=0, sigma=1)

    ## Define linear model
    yest = b0 + b1 * dfhoggs['x']

    ## Use y error from dataset, convert into theano variable
    sigma_y = theano.shared(np.asarray(dfhoggs['sigma_y'],
                                       dtype=theano.config.floatX), name='sigma_y')

    ## Define Normal likelihood
    likelihood = pm.Normal('likelihood', mu=yest, sigma=sigma_y, observed=dfhoggs['y'])

with mdl_ols:
    ## take samples
    traces_ols = pm.sample()
Auto-assigning NUTS sampler... Initializing NUTS using jitter+adapt_diag... Multiprocess sampling (2 chains in 2 jobs) NUTS: [b1_slope, b0_intercept] Sampling 2 chains: 100%|██████████| 2000/2000 [00:00<00:00, 2023.86draws/s]
_ = pm.traceplot(traces_ols,
lines=tuple([(k, {}, v['mean'])
for k, v in pm.summary(traces_ols).iterrows()]))
NOTE: We'll illustrate this OLS fit and compare to the datapoints in the final plot
I've added this brief section in order to compare directly against the Student-T based method demonstrated in Thomas Wiecki's notebook.
Instead of using a Normal distribution for the likelihood, we use a Student-T, which has fatter tails. In theory this allows outliers to have a smaller mean square error in the likelihood, and thus have less influence on the regression estimation. This method does not produce inlier / outlier flags but is simpler and faster to run than the Signal Vs Noise model below, so a comparison seems worthwhile.
Note: we'll constrain the Student-T 'degrees of freedom' parameter nu
to lie between 1 and 100, but otherwise leave it as just another stochastic to be inferred: no need for prior knowledge.
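To see why the fatter tails matter, here's a small illustrative check (my addition, not part of the original notebook) using scipy.stats: the log-probability penalty for a point 5 standard deviations from the mean is far milder under a Student-T with nu=1 than under a Normal, so such a point drags the fit around much less.
from scipy import stats

# Illustrative comparison with made-up values (not taken from the dataset):
# log-probability at 0 and at 5 standard deviations from the mean.
for name, dist in [('Normal', stats.norm(0, 1)),
                   ('Student-T (nu=1)', stats.t(df=1, loc=0, scale=1))]:
    print('{:>17}: logp(0) = {:6.2f}, logp(5) = {:7.2f}'.format(name, dist.logpdf(0), dist.logpdf(5)))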
with pm.Model() as mdl_studentt:
    ## Define weakly informative Normal priors to give Ridge regression
    b0 = pm.Normal('b0_intercept', mu=0, sigma=1)
    b1 = pm.Normal('b1_slope', mu=0, sigma=1)

    ## Define linear model
    yest = b0 + b1 * dfhoggs['x']

    ## Use y error from dataset, convert into theano variable
    sigma_y = theano.shared(np.asarray(dfhoggs['sigma_y'],
                                       dtype=theano.config.floatX), name='sigma_y')

    ## define prior for Student T degrees of freedom
    nu = pm.Uniform('nu', lower=1, upper=100)

    ## Define Student T likelihood
    likelihood = pm.StudentT('likelihood', mu=yest, sigma=sigma_y, nu=nu,
                             observed=dfhoggs['y'])

with mdl_studentt:
    ## take samples
    traces_studentt = pm.sample(tune=2000)
Auto-assigning NUTS sampler... Initializing NUTS using jitter+adapt_diag... Multiprocess sampling (2 chains in 2 jobs) NUTS: [nu, b1_slope, b0_intercept] Sampling 2 chains: 100%|██████████| 5000/5000 [00:03<00:00, 1324.87draws/s] The acceptance probability does not match the target. It is 0.6456398252635402, but should be close to 0.8. Try to increase the number of tuning steps.
_ = pm.traceplot(traces_studentt,
lines=tuple([(k, {}, v['mean'])
for k, v in pm.summary(traces_studentt).iterrows()]))
Observe:
b0 and b1 show quite a skew to the right, possibly the effect of a few samples regressing closer to the OLS estimate, which is towards the left.
The nu parameter seems very happy to stick at nu = 1, indicating that a fat-tailed Student-T likelihood has a better fit than a thin-tailed (Normal-like) Student-T likelihood.
NOTE: We'll illustrate this Student-T fit and compare to the datapoints in the final plot.
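As a quick check of the claim above that nu sits near 1 (a small convenience snippet of mine, not part of the original notebook):
# inspect the posterior of nu directly from the trace
print('posterior mean of nu: {:.2f}'.format(traces_studentt['nu'].mean()))
print('proportion of samples with nu < 2: {:.1%}'.format((traces_studentt['nu'] < 2).mean()))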
Please read the paper (Hogg 2010) and Jake Vanderplas' code for more complete information about the modelling technique.
The general idea is to create a 'mixture' model whereby datapoints can be described by either the linear model (inliers) or a modified linear model with different mean and larger variance (outliers).
The likelihood is evaluated over a mixture of two likelihoods, one for 'inliers', one for 'outliers'. A Bernoulli distribution is used to randomly assign each of the N datapoints to the inlier or outlier group, and we sample the model as usual to infer robust model parameters and inlier / outlier flags:
$$ \mathcal{logL} = \sum_{i}^{i=N} log \left[ \frac{(1 - B_{i})}{\sqrt{2 \pi \sigma_{in}^{2}}} exp \left( - \frac{(x_{i} - \mu_{in})^{2}}{2\sigma_{in}^{2}} \right) \right] + \sum_{i}^{i=N} log \left[ \frac{B_{i}}{\sqrt{2 \pi (\sigma_{in}^{2} + \sigma_{out}^{2})}} exp \left( - \frac{(x_{i}- \mu_{out})^{2}}{2(\sigma_{in}^{2} + \sigma_{out}^{2})} \right) \right] $$

where:
$\bf{B}$ is Bernoulli-distributed $B_{i} \in \{0_{(inlier)}, 1_{(outlier)}\}$
Note: A previous version of this example implemented the above likelihood directly in theano. However, we can implement it more efficiently using the Normal logp from PyMC3 and a Potential.
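To make the formula concrete, here is a small numpy/scipy sketch (illustrative only; the parameter values and the outlier labels B are made up, not inferred) of the quantity the Potential below computes: each datapoint contributes the log-probability of whichever component its flag selects.
from scipy import stats

# Made-up parameter values, purely to illustrate the mixture log-likelihood
y_obs = dfhoggs['y'].values
mu_in = 0. + 1. * dfhoggs['x'].values            # assumed b0 = 0, b1 = 1
sigma_in = dfhoggs['sigma_y'].values
mu_out, sigma_out = 0., 1.                       # assumed outlier mean and extra spread
B = np.zeros(len(y_obs))
B[[2, 3]] = 1                                    # two arbitrarily chosen 'outliers'

logp_in = stats.norm.logpdf(y_obs, loc=mu_in, scale=sigma_in)
logp_out = stats.norm.logpdf(y_obs, loc=mu_out, scale=sigma_in + sigma_out)

# inlier flags (B=0) pick up logp_in, outlier flags (B=1) pick up logp_out
logL = ((1 - B) * logp_in + B * logp_out).sum()
print('mixture log-likelihood for these made-up values: {:.1f}'.format(logL))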
with pm.Model() as mdl_signoise:
    ## Define informative Normal priors to give Ridge regression
    b0 = pm.Normal('b0_intercept', mu=0, sigma=1, testval=pm.floatX(0.1))
    b1 = pm.Normal('b1_slope', mu=0, sigma=1, testval=pm.floatX(1.))

    ## Define linear model
    yest_in = b0 + b1 * dfhoggs['x']

    ## Define weakly informative priors for the mean and variance of outliers
    yest_out = pm.Normal('yest_out', mu=0, sigma=10, testval=pm.floatX(1.))
    sigma_y_out = pm.HalfNormal('sigma_y_out', sigma=5, testval=pm.floatX(1.))

    ## Define Bernoulli inlier / outlier flags according to a hyperprior
    ## fraction of outliers, itself constrained to [0, .5] for symmetry
    frac_outliers = pm.Uniform('frac_outliers', lower=0.0, upper=.5)
    is_outlier = pm.Bernoulli('is_outlier', p=frac_outliers, shape=dfhoggs.shape[0],
                              testval=np.random.rand(dfhoggs.shape[0]) < 0.2)

    ## Extract observed y and sigma_y from dataset, encode as theano objects
    yobs = theano.shared(np.asarray(dfhoggs['y'], dtype=theano.config.floatX))
    sigma_y_in = np.asarray(dfhoggs['sigma_y'], dtype=theano.config.floatX)

    # Set up normal distributions that give us the logp for both distributions
    inliers = pm.Normal.dist(mu=yest_in, sigma=sigma_y_in).logp(yobs)
    outliers = pm.Normal.dist(mu=yest_out, sigma=sigma_y_in + sigma_y_out).logp(yobs)

    # Build custom likelihood: a potential will just be added to the logp and can thus
    # function like a likelihood that we would add with the observed kwarg.
    pm.Potential('obs', ((1 - is_outlier) * inliers).sum() + (is_outlier * outliers).sum())

with mdl_signoise:
    traces_signoise = pm.sample(tune=5000)
Multiprocess sampling (2 chains in 2 jobs) CompoundStep >NUTS: [frac_outliers, sigma_y_out, yest_out, b1_slope, b0_intercept] >BinaryGibbsMetropolis: [is_outlier] Sampling 2 chains: 100%|██████████| 11000/11000 [00:20<00:00, 529.35draws/s] There were 11 divergences after tuning. Increase `target_accept` or reparameterize. The acceptance probability does not match the target. It is 0.89434265257995, but should be close to 0.8. Try to increase the number of tuning steps. The number of effective samples is smaller than 25% for some parameters.
There are some divergences, and because we are explicitly modeling the latent labels (outlier or not) the sampler can struggle. Rewriting this model as a marginalized mixture model would be better.
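Here is a minimal sketch of what such a marginalized model could look like (my assumption of one possible reparameterization, not code from the original notebook): the Bernoulli labels are summed out with a log-sum-exp over the two components, leaving only continuous parameters for NUTS.
# Hypothetical marginalized variant of the Signal vs Noise model above
with pm.Model() as mdl_signoise_marg:
    b0 = pm.Normal('b0_intercept', mu=0, sigma=1)
    b1 = pm.Normal('b1_slope', mu=0, sigma=1)
    yest_in = b0 + b1 * dfhoggs['x'].values

    yest_out = pm.Normal('yest_out', mu=0, sigma=10)
    sigma_y_out = pm.HalfNormal('sigma_y_out', sigma=5)
    frac_outliers = pm.Uniform('frac_outliers', lower=0., upper=.5)

    sigma_y_in = np.asarray(dfhoggs['sigma_y'], dtype=theano.config.floatX)
    yobs = np.asarray(dfhoggs['y'], dtype=theano.config.floatX)

    # per-datapoint log-probability under each component, weighted by the mixture fraction
    logp_in = tt.log(1 - frac_outliers) + pm.Normal.dist(mu=yest_in, sigma=sigma_y_in).logp(yobs)
    logp_out = tt.log(frac_outliers) + pm.Normal.dist(mu=yest_out, sigma=sigma_y_in + sigma_y_out).logp(yobs)

    # marginalize the labels: log-sum-exp over the two components, then sum over datapoints
    pm.Potential('obs', pm.math.logsumexp(tt.stack([logp_in, logp_out]), axis=0).sum())
Note that this form no longer yields per-datapoint is_outlier flags directly; they would have to be reconstructed afterwards from the two component log-probabilities.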
varnames = ['b0_intercept', 'b1_slope', 'yest_out', 'sigma_y_out', 'frac_outliers']
_ = pm.traceplot(traces_signoise, var_names=varnames,
lines=tuple([(k, {}, v['mean'])
for k, v in pm.summary(traces_signoise, varnames=varnames).iterrows()]))
NOTE:
At each step of the traces, each datapoint may be either an inlier or outlier. We hope that the datapoints spend an unequal time being one state or the other, so let's take a look at the simple count of states for each of the 20 datapoints.
outlier_melt = pd.melt(pd.DataFrame(traces_signoise['is_outlier', -1000:],
columns=['[{}]'.format(int(d)) for d in dfhoggs.index]),
var_name='datapoint_id', value_name='is_outlier')
ax0 = sns.pointplot(y='datapoint_id', x='is_outlier', data=outlier_melt,
kind='point', join=False, ci=None, size=4, aspect=2)
_ = ax0.vlines([0,1], 0, 19, ['b','r'], '--')
_ = ax0.set_xlim((-0.1,1.1))
_ = ax0.set_xticks(np.arange(0, 1.1, 0.1))
_ = ax0.set_xticklabels(['{:.0%}'.format(t) for t in np.arange(0,1.1,0.1)])
_ = ax0.yaxis.grid(True, linestyle='-', which='major', color='w', alpha=0.4)
_ = ax0.set_title('Prop. of the trace where datapoint is an outlier')
_ = ax0.set_xlabel('Prop. of the trace where is_outlier == 1')
Observe:
frac_outliers is ~0.35, corresponding to roughly 7 of the 20 datapoints: you can see these 7 datapoints in the plot above, all those with a value >50% or thereabouts.
The 95% cutoff we choose is subjective and arbitrary, but I prefer it for now, so let's declare these 3 datapoints to be outliers and see how it looks compared to Jake Vanderplas' outliers, which were declared in a slightly different way as points with means above 0.68.
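Equivalently (a small convenience sketch of mine, not in the original notebook), the per-datapoint outlier proportion can be read straight off the trace:
# posterior proportion of samples in which each datapoint is flagged as an outlier
outlier_prob = traces_signoise['is_outlier'].mean(axis=0)
print(pd.Series(outlier_prob, index=dfhoggs.index, name='prop_outlier').round(2))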
Note: we flag a datapoint as an outlier if its is_outlier value at the 5th percentile of the trace is still 1, i.e. it is flagged as an outlier in at least ~95% of samples.
cutoff = 5
dfhoggs['outlier'] = np.percentile(traces_signoise['is_outlier'], cutoff, axis=0)
dfhoggs['outlier'].value_counts()
0.0 17 1.0 3 Name: outlier, dtype: int64
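For reference (a convenience check, not part of the original notebook), these are the index labels of the 3 datapoints just declared outliers:
# index labels of the datapoints flagged as outliers at the 95% level
print(list(dfhoggs.index[dfhoggs['outlier'] == 1]))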
from matplotlib.lines import Line2D
g = sns.FacetGrid(dfhoggs, height=8, hue='outlier', hue_order=[True,False],
palette='Set1', legend_out=False)
lm = lambda x, samp: samp['b0_intercept'] + samp['b1_slope'] * x
pm.plot_posterior_predictive_glm(traces_ols,
eval=np.linspace(-3, 3, 10), lm=lm, samples=200, color='#22CC00', alpha=.2)
pm.plot_posterior_predictive_glm(traces_studentt, lm=lm,
eval=np.linspace(-3, 3, 10), samples=200, color='#FFA500', alpha=.5)
pm.plot_posterior_predictive_glm(traces_signoise, lm=lm,
eval=np.linspace(-3, 3, 10), samples=200, color='#357EC7', alpha=.3)
ols_line = Line2D([0], [0], color='#22CC00')
studentt_line = Line2D([0], [0], color='#FFA500')
hogg_line = Line2D([0], [0], color='#357EC7')
line_legend = plt.legend([ols_line, studentt_line, hogg_line], ['OLS', 'Student-T', 'Hogg'], loc=4)
plt.gca().add_artist(line_legend)
_ = g.map(plt.errorbar, 'x', 'y', 'sigma_y', 'sigma_x', marker="o", ls='').add_legend()
_ = g.axes[0][0].set_ylim(ylims)
_ = g.axes[0][0].set_xlim(xlims)
Observe:
The posterior predictive fit is shown for the OLS model (green), the Student-T model (orange) and the Signal vs Noise model (blue).
We see that the Robust Signal vs Noise model also yields specific estimates of which datapoints are outliers: the 3 flagged datapoints are picked out by the hue in the scatterplot.
Overall, it seems that the Signal vs Noise model gives both a robust fit and explicit inlier / outlier labels, at the cost of a more complicated model that is slower to sample than the Student-T alternative.
Example originally contributed by Jonathan Sedar 2015-12-21 github.com/jonsedar. Updated by Thomas Wiecki 2018-7-24.