The purpose of this tutorial is to introduce the concepts of Gaussian process regression, and give users an interactive environment for experimenting with Gaussian processes. This tutorial is based on Rasmussen and Williams (2006), but is designed to be more readable and to encourage experimentation!
Let's generate some fake, one-dimensional data, which we'll fit with various techniques.
First we need to import matplotlib, numpy, and scipy for this notebook:
# Import some packages for displaying results
from IPython.display import display
from ipywidgets import interactive
# Import the standard Python science stack:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import minimize
Now let's generate a small dataset of observations $y$, observed at times $x$:
np.random.seed(42)
ndim = 10
# We want our `x` and `y` vectors to be column vectors, so we'll use
# the [:, None] shorthand to turn rows into columns
y = np.random.rand(ndim)[:, None]
y -= y.mean()
x = np.arange(len(y))[:, None]
yerr = (y.std() + 0.1*np.random.randn(len(x))) / 2
# Plot the observations:
plt.figure(figsize=(8, 6))
plt.errorbar(x.ravel(), y.ravel(), yerr, fmt='.', color='k')
plt.xlabel('x')
plt.ylabel('y');
Solutions to the ordinary least squares estimators $\hat{\beta}$ are
$$ \hat{\beta} = ({\bf X}^\top {\bf N}^{-1} {\bf X})^{-1} {\bf X}^\top {\bf N}^{-1} y,$$and their uncertainties are given by
$$ \mathrm{cov} = ({\bf X}^\top {\bf N}^{-1} {\bf X})^{-1}, $$where ${\bf N} = \sigma_n^2 {\bf I}$ is the matrix of variances $\sigma_n^2$ on measurements $y$.
Let's implement this using numpy! Note: the @ operator performs matrix multiplication in Python 3.5 and newer.
# Append a column of ones next to the `x` values using `np.vander`:
X = np.vander(x.ravel(), 2)
inv_N = np.linalg.inv(np.identity(len(x)) * yerr**2)
# Solve linear regression:
betas = np.linalg.inv(X.T @ inv_N @ X) @ X.T @ inv_N @ y
cov = np.linalg.inv(X.T @ inv_N @ X)
# Compute best-fit line:
best_fit = X @ betas
err = np.sqrt(np.diag(cov))
Plot the result and its uncertainty (here we're plotting just the uncertainty in the intercept and ignoring the uncertainty in the slope, for this example):
plt.figure(figsize=(8, 6))
plt.errorbar(x.ravel(), y.ravel(), yerr, fmt='.', color='k')
plt.plot(x, best_fit)
plt.fill_between(x.ravel(), best_fit.ravel()-err[1], best_fit.ravel()+err[1],
alpha=0.3)
plt.xlabel('x')
plt.ylabel('y');
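As a quick sanity check (an aside, not part of the tutorial flow), the same weighted least-squares solution can be recovered with `np.polyfit`, which accepts weights of $1/\sigma$:

```python
import numpy as np

# Rebuild the same synthetic dataset (identical seed to the cells above):
np.random.seed(42)
ndim = 10
y = np.random.rand(ndim)[:, None]
y -= y.mean()
x = np.arange(len(y))[:, None]
yerr = (y.std() + 0.1*np.random.randn(len(x))) / 2

# Normal-equations solution, as above:
X = np.vander(x.ravel(), 2)
inv_N = np.linalg.inv(np.identity(len(x)) * yerr**2)
betas = np.linalg.inv(X.T @ inv_N @ X) @ X.T @ inv_N @ y

# `np.polyfit` with weights 1/sigma minimizes the same chi^2,
# and returns coefficients in the same [slope, intercept] order:
slope, intercept = np.polyfit(x.ravel(), y.ravel(), 1, w=1/yerr)

print(np.allclose(betas.ravel(), [slope, intercept]))  # True
```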
Gaussian process regression, in this example, is a technique for interpolating between observations or predicting the values of observations at a given time. The linear algebra this time looks a little more complicated, but it's a similar process:
We define a "covariance function" or "kernel function"
$$ {\bf K}({\bf x_1}, {\bf x_2}) = \exp\left(-\frac{(x_1 - x_2)^2}{2 \ell^2}\right) $$A kernel function describes the correlation between neighboring points. To start, we're using the "squared exponential" kernel, which is equivalent to saying "the correlation between neighboring points is defined by a Gaussian with one parameter: $\ell$." We call the tunable parameters within the kernel function hyperparameters. Don't let the fancy name intimidate you – all hyperparameters do is define the kernel function, which describes how correlated neighboring points are.
The $\ell$ parameter sets the autocorrelation timescale, in other words, how far two points need to be from each other in $x$ (time) before they are no longer correlated. If $\ell$ is large, distant points become more correlated, or in other words, the function becomes smoother. If $\ell$ is small, the function varies more rapidly with $x$.
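To make this concrete, here's a quick numerical aside: hold the separation between two points fixed and watch the squared-exponential correlation grow with $\ell$:

```python
import numpy as np

d = 2.0  # fixed separation between two points
for ell in (0.5, 2.0, 8.0):
    # Squared-exponential correlation between the two points:
    corr = np.exp(-0.5 * d**2 / ell**2)
    print(f"ell = {ell}: correlation = {corr:.4f}")
```

Larger $\ell$ gives correlations closer to 1, so the function varies more smoothly.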
Let's say that we've taken observations at times ${\bf x}$ and we'd like to predict unobserved values of $y$ at new times ${\bf x_\star}$. ${\bf I}$ is the identity matrix, and $\sigma_n$ is the uncertainty on each measurement $y$. The predictive mean is (Equation 2.25 of Rasmussen and Williams (2006))
$$ \mu_* = {\bf K(x, x_*)}^\top ({\bf K(x, x)} + \sigma_n^2 {\bf I})^{-1} {\bf y}, $$
and the covariance is (Equation 2.26)
$$ \mathrm{cov} = {\bf K}({\bf x_*}, {\bf x_*})- {\bf K(x, x_*)}^\top ({\bf K(x, x)}+ \sigma_n^2 {\bf I})^{-1} {\bf K(x, x_*)}, $$and thus the uncertainty on the $\mu_*$ predictions are given by
$$ \mathrm{err} = \sqrt{\mathrm{diag}\left( \mathrm{cov} \right)}$$where the $\textrm{diag}$ operator extracts the diagonal of a matrix.
We also get the log marginal likelihood by computing:
$$ \ln p({\bf y} | X) = -\frac{1}{2} {\bf y}^\top ({\bf K(x, x)} + \sigma_n^2 {\bf I})^{-1} {\bf y} - \frac{1}{2}\log{ \left|{\bf K(x, x)} + \sigma_n^2 {\bf I}\right|} - \frac{n}{2}\log{2\pi} $$We can implement this in just a few lines of Python:
def square_distance(x1, x2):
"""
Compute the squared distance between two vectors `x1` and `x2`, which
need not have the same shape.
Note that (x1 - x2)^2 = x1^2 + x2^2 - 2 * x1 * x2, so we can use terse
(and rather opaque) numpy syntax implemented here to avoid loops.
Parameters
----------
x1 : `~numpy.ndarray`
Positions with shape (M, 1)
x2 : `~numpy.ndarray`
Positions with shape (N, 1)
Returns
-------
d : `~numpy.ndarray`
Distances between each position with shape (M, N)
"""
## The code after `return` is shorthand for the following routine, but much faster:
# result = np.zeros((len(x1), len(x2)))
# for i in range(len(x1)):
# for j in range(len(x2)):
# result[i, j] = (x1[i] - x2[j])**2
return np.sum(x1**2, axis=1)[:, None] + np.sum(x2**2, axis=1) - 2 * x1 @ x2.T
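A quick check of the broadcasting trick above (the one-line expression is repeated here so this cell runs on its own):

```python
import numpy as np

x1 = np.arange(3.0)[:, None]  # shape (3, 1)
x2 = np.arange(4.0)[:, None]  # shape (4, 1)

# Vectorized squared distances, as in `square_distance` above:
fast = np.sum(x1**2, axis=1)[:, None] + np.sum(x2**2, axis=1) - 2 * x1 @ x2.T

# Brute-force double loop for comparison:
slow = np.array([[(a - b)**2 for b in x2.ravel()] for a in x1.ravel()])

print(np.allclose(fast, slow))  # True
```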
def sq_exp_kernel(x1, x2, ell=1):
"""
Squared-exponential kernel function
Parameters
----------
x1 : `~numpy.ndarray`
Positions with shape (M, 1)
x2 : `~numpy.ndarray`
Positions with shape (N, 1)
ell : float
Correlation lengthscale hyperparameter (default: 1)
Returns
-------
k : `~numpy.ndarray`
Covariance between `x1` and `x2`
"""
sqdist = square_distance(x1, x2)
return np.exp(-0.5 * sqdist / ell**2)
def gaussian_process_regression(x, y, yerr, xtest, kernel, **kwargs):
"""
Gaussian process regression for column vectors `x` and `y` with
uncertainties `yerr`; predict values at `xtest` using `kernel`.
You may ignore the **kwargs trick we're using here, which will come
in handy later.
Parameters
----------
x : `~numpy.ndarray`
Times of observations (for example)
y : `~numpy.ndarray`
Observations made at each `x`
yerr : `~numpy.ndarray`
Uncertainties on the observations `y`
xtest : `~numpy.ndarray`
New times (for example) with the same units as `x`
kernel : function
Kernel (covariance) function.
Returns
-------
mu : `~numpy.ndarray`
Predictive mean of the Gaussian process at new times `xtest`
cov : `~numpy.ndarray`
Predictive Gaussian process covariance matrix (which can
be used to compute uncertainties on `mu`)
lnlike : float
Log marginal likelihood of the Gaussian process model
"""
K = kernel(x, x, **kwargs) + yerr**2 * np.identity(len(x))
K_s = kernel(x, xtest, **kwargs)
K_ss = kernel(xtest, xtest, **kwargs)
# Take the inverse only once, since this is the most expensive computation:
inv_K = np.linalg.inv(K)
# The `@` operator in python 3 is the matrix multiplication operator
mu = K_s.T @ inv_K @ y
cov = K_ss - K_s.T @ inv_K @ K_s
# Compute the log marginal likelihood
lnlike = -0.5 * y.T @ inv_K @ y - 0.5 * np.log(np.linalg.det(K)) - len(x)/2 * np.log(2*np.pi)
return mu, cov, lnlike.item()
Let's call our Gaussian process regression method on our data, interpolating between the observations:
N = 100
# Interpolate `N` observations from the minimum to maximum values of `x`
xtest = np.linspace(x.min(), x.max(), N)[:, None]
def gp_interact(error_scale):
mu, cov, lnlike = gaussian_process_regression(x, y, yerr*error_scale, xtest, sq_exp_kernel)
err = np.sqrt(np.diagonal(cov))
plt.figure(figsize=(8, 6))
plt.errorbar(x.ravel(), y.ravel(), yerr*error_scale, fmt='.', color='k')
plt.plot(xtest.ravel(), mu.ravel(), label='GP mean')
plt.fill_between(xtest.ravel(), mu.ravel()-err, mu.ravel()+err,
alpha=0.3, label='GP uncertainty')
plt.legend(loc='upper center')
plt.xlabel('x')
plt.ylabel('y');
print('Vary the uncertainty with the scaling parameter "error_scale", which gets multiplied by the error in y:')
interactive_plot = interactive(gp_interact, error_scale=(0.1, 1, 0.1))
output = interactive_plot.children[-1]
interactive_plot
In the figure above, the blue line shows the predicted value of $y$ for values of $x$ where there were no observations – that's why we say Gaussian process regression is a form of interpolation. It's essentially interpolating between points, where the smoothness of the interpolation is set by the kernel function.
But Gaussian process regression is more powerful than just predicting the value of $y$ for arbitrary $x$, it also computes the uncertainty of the prediction for arbitrary $x$. The blue region in the figure above shows the uncertainty on the predicted value of $y$ at each $x$. Note that the uncertainty is roughly equivalent to the width of the error bars where there are observations, and the uncertainty gets larger in between observations. You can change the uncertainties on each point and see its effect on the uncertainty in the predictive mean by sliding the error_scale
slider above the plot. As your observations shift towards higher precision (smaller uncertainties), so does the Gaussian process model.
Gaussian process regression is often referred to as a form of "non-parametric" estimation, which is a way of saying that you don't define the functional form of the mean $\mu$. $\mu$ is not a polynomial function, and it is not a trigonometric function; it is a highly adaptive interpolating function whose form is defined by its covariance matrix (${\bf K}$), rather than by some explicit function $\mu = f(x)$ (which means it's typically not possible to write the GP mean as a simple closed-form expression in $x$).
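One way to see how the covariance matrix alone defines the "shape" of the function is to draw random functions from the GP prior (an illustrative aside; the grid and lengthscale below are arbitrary choices):

```python
import numpy as np

# Build a squared-exponential covariance matrix on a grid of 50 points:
xg = np.linspace(0, 10, 50)[:, None]
d2 = np.sum(xg**2, axis=1)[:, None] + np.sum(xg**2, axis=1) - 2 * xg @ xg.T
K = np.exp(-0.5 * d2 / 2.0**2) + 1e-8 * np.identity(len(xg))  # jitter for stability

# Each multivariate-normal draw is one random "function" consistent with K:
rng = np.random.default_rng(42)
samples = rng.multivariate_normal(np.zeros(len(xg)), K, size=3)
print(samples.shape)  # (3, 50)
```

Plotting each row of `samples` against `xg` shows smooth curves whose wiggliness is set entirely by the kernel's $\ell$.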
Let's fit the data with an interactive kernel, which allows you to vary the $\ell$ hyperparameter and see the results:
def gp_interact(ell):
"""
The contents of this function will be interactive!
"""
def sq_exp_kernel_interactive(x1, x2=None, ell=ell):
"""
Interactive Gaussian Kernel function
"""
if x2 is not None:
sqdist = square_distance(x1, x2)
else:
sqdist = x1**2
return np.exp(-0.5 * sqdist / ell**2)
mu, cov, lnlike = gaussian_process_regression(x, y, yerr, xtest, sq_exp_kernel_interactive)
err = np.sqrt(np.diag(cov))
fig, ax = plt.subplots(1, 3, figsize=(14, 4))
ax[0].errorbar(x.ravel(), y.ravel(), yerr, fmt='.', color='k')
ax[0].plot(xtest.ravel(), mu.ravel(), label='GP Mean')
ax[0].fill_between(xtest.ravel(), mu.ravel()-err, mu.ravel()+err,
alpha=0.2, label='GP Uncertainty')
ax[0].set(xlabel='$x$', ylabel='$y$', title='GP Regression')
ax[0].legend(loc='upper center')
ax[1].set(xlabel='$x_1$', ylabel='$x_2$', title='Covariance matrix')
ax[1].imshow(sq_exp_kernel_interactive(x, x))
ax[2].plot(xtest, sq_exp_kernel_interactive(xtest))
ax[2].set(xlabel='Lags', ylabel='Covariance', title='Autocorrelation Function')
fig.tight_layout()
plt.show()
print('Vary the hyperparameter "ell", which defines the autocorrelation timescale:')
interactive_plot = interactive(gp_interact, ell=(1, 10, 1))
output = interactive_plot.children[-1]
interactive_plot
In the figure above, the left plot shows the mean model (blue) with its uncertainty (blue shaded region), and the data (black circles).
The middle plot shows the covariance matrix – note that it's got most of its weight along the diagonal, which is a mathematical representation of saying "observations close in $x$ to one another are more correlated than observations far apart in $x$". The brightness of each pixel corresponds to the strength of the correlation between the two inputs $x_1$ and $x_2$.
Alternatively, a simpler way to visualize the behavior of your kernel function is to plot the kernel function directly, which is shown on the right. Note how distant points (large lags) have stronger autocorrelation power when ell
is large.
The exponential-cosine kernel is a cosine function multiplied by a decaying exponential envelope. It is good at fitting quasi-periodic signals (like stellar photometry, for example).
def gp_interact(ell, period):
"""
The contents of this function will be interactive!
"""
def exp_cos(x1, x2=None, sigma=1, ell=ell, period=period):
"""
Exponentially-decaying cosine function
"""
if x2 is not None:
sqdist = square_distance(x1, x2)
else:
sqdist = x1**2
return np.exp(-0.5 * sqdist / ell**2) * np.cos(2*np.pi*np.sqrt(sqdist) / period)
mu, cov, lnlike = gaussian_process_regression(x, y, yerr, xtest, exp_cos)
err = np.sqrt(np.diag(cov))
fig, ax = plt.subplots(1, 3, figsize=(14, 4))
ax[0].errorbar(x.ravel(), y.ravel(), yerr, fmt='.', color='k')
ax[0].plot(xtest.ravel(), mu.ravel(), label='GP Mean')
ax[0].fill_between(xtest.ravel(), mu.ravel()-err, mu.ravel()+err,
alpha=0.2, label='GP Uncertainty')
ax[0].set(xlabel='$x$', ylabel='$y$', title='GP Regression')
ax[0].legend(loc='upper center')
ax[1].set(xlabel='$x_1$', ylabel='$x_2$', title='Covariance matrix')
ax[1].imshow(exp_cos(x, x))
ax[2].plot(xtest, exp_cos(xtest))
ax[2].set(xlabel='Lags', ylabel='Covariance',
title='Autocorrelation Function')
fig.tight_layout()
plt.show()
print('Vary the hyperparameters "ell" and "period", which define the autocorrelation timescale:')
interactive_plot = interactive(gp_interact, ell=(1, 10, 1), period=(1, 10, 1))
output = interactive_plot.children[-1]
interactive_plot
It's easiest to see what this choice of kernel function (exp-cos) looks like by watching the autocorrelation plot above as you change the parameters.
Now you may be asking yourself, "wow this is great, I'd like to apply this technique to a huge dataset. How does the GP scale with the size of the dataset?" The answer is a bit scary – the computational expense in the general case scales as $\mathcal{O}(N^3)$. The most computationally expensive step in the process here is the matrix inversion. We're using numpy
's standard matrix inversion function, which should work on any matrix.
If you want a faster implementation of your Gaussian process regression, you can use a package like george
or celerite
which implement faster matrix inversions by taking advantage of special properties of the covariance matrices of certain kernel functions.
Gaussian processes are not just a means of marginalizing over noise in observations, they can also be used to measure physical quantities. Say for example that you are measuring the flux $f$ of a rotating star as a function of time $t$:
true_period = 3.5 # days
np.random.seed(42)
npoints = 50
t = np.linspace(0, 2*true_period, npoints)[:, None]
# Generate a sinusoidal signal
sinusoid = np.sin(2*np.pi*t/true_period)
# Assign non-uniform uncertainties to each point
ferr = 0.2 + 0.05 * np.random.randn(len(t))
# Add correlated noise to the sinusoidal signal
cov = np.diag((2*ferr)**2) + 0.5 * np.exp(-0.5 * square_distance(t, t) / 2)
f = np.random.multivariate_normal(sinusoid.ravel(), cov)[:, None]
t_test = np.linspace(t.min(), t.max() + 5, 100)[:, None]
plt.errorbar(t.ravel(), f.ravel(), ferr, color='k', fmt='.')
plt.xlabel('Time [days]')
plt.ylabel('Flux');
There's a lot of noise in these observations, but there's also a signal – a periodic signal of rotation from the star. Let's fit these data with a cosine kernel, and extrapolate our results into the future:
def cos(x1, x2=None, sigma=1, period=None):
"""
Cosine covariance function
"""
if x2 is not None:
sqdist = square_distance(x1, x2)
else:
sqdist = x1**2
return np.cos(2*np.pi*np.sqrt(sqdist) / period)
def gp_interact(period):
"""
The contents of this function will be interactive!
"""
# Evaluate at the observed times to get the log-likelihood for this period:
mu, cov, lnlike = gaussian_process_regression(t, f, ferr, t, cos, period=period)
# Re-evaluate at `t_test` for predictions that extrapolate into the future:
mu, cov, _ = gaussian_process_regression(t, f, ferr, t_test, cos, period=period)
err = np.sqrt(np.diag(cov))
fig, ax = plt.subplots(1, 3, figsize=(14, 4))
ax[0].errorbar(t.ravel(), f.ravel(), ferr, fmt='.', color='k')
ax[0].plot(t_test.ravel(), mu.ravel(), label='GP Mean')
ax[0].fill_between(t_test.ravel(), mu.ravel()-err, mu.ravel()+err, alpha=0.2, label='GP Uncertainty')
ax[0].set(xlabel='Time [days]', ylabel='Flux', title='GP Regression')
ax[0].legend(loc='upper center')
ax[1].set(xlabel='$x_1$', ylabel='$x_2$', title='Covariance matrix')
ax[1].imshow(cos(t, t, period=period))
ax[2].plot(t_test, cos(t_test, period=period))
ax[2].set(xlabel='Lags', ylabel='Covariance', title='Autocorrelation Function')
fig.tight_layout()
plt.show()
display("Log-likelihood: {0}".format(lnlike))
print('Vary the hyperparameter "period", which defines the oscillation period of the kernel:')
interactive_plot = interactive(gp_interact, period=(1, 10, 0.5))
output = interactive_plot.children[-1]
interactive_plot
Now vary the period
parameter using the slider – what parameter gives you the best fit to the data? Does it match the input period (true_period
). You can refer to the log-likelihood, which is printed at the bottom of the visualization. The log-likelihood plays the role of $-\chi^2/2$ (up to normalization terms): you want to maximize the log-likelihood to find the best-fit parameters.
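To see the connection between the log-likelihood and $\chi^2$ concretely: when the covariance matrix is purely diagonal (no correlated term), the GP log-likelihood reduces exactly to the familiar independent-Gaussian form. A small numerical check (toy numbers, just for illustration):

```python
import numpy as np

y = np.array([0.1, -0.3, 0.2])
sigma = np.array([0.2, 0.3, 0.25])
K = np.diag(sigma**2)  # diagonal covariance: independent measurements

# GP log marginal likelihood, as in the equation above:
lnlike_gp = (-0.5 * y @ np.linalg.inv(K) @ y
             - 0.5 * np.log(np.linalg.det(K))
             - len(y)/2 * np.log(2*np.pi))

# Independent-Gaussian form: -chi^2/2 plus normalization terms
chi2 = np.sum(y**2 / sigma**2)
lnlike_indep = -0.5 * chi2 - np.sum(np.log(sigma)) - len(y)/2 * np.log(2*np.pi)

print(np.allclose(lnlike_gp, lnlike_indep))  # True
```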
We can use an optimizer to fit for the best period:
def chi2(p):
period = p
# Note: the observation times `t` double as the test times here, since we
# want to evaluate the Gaussian process at the observed points
mu, cov, lnlike = gaussian_process_regression(t, f, ferr, t, cos, period=period)
return -2 * lnlike
# Initial guesses for [period]:
init_parameters = [3.]
# Place a uniform prior (boundaries) on the value of [period]
# that is explored by the minimizer:
bounds = [[2, 10]]
# Use `minimize` to minimize the chi^2 using the L-BFGS-B method
solution = minimize(chi2, init_parameters, method='L-BFGS-B', bounds=bounds)
best_period = solution.x[0]
mu, cov, lnlike = gaussian_process_regression(t, f, ferr, t, cos, period=best_period)
# Print the results:
print("Maximum-likelihood period: {0:.2f} days".format(best_period))
If the value above is close to the input period, good job, you've just used a periodic kernel function to find the period lurking in noisy data!
You must be careful to set a reasonable prior (especially a lower limit) on the timescale parameter ell
for your GP – if you don't, an optimizer will likely tweak your hyperparameters such that the timescale parameter ell
is small, allowing high frequency variations in the time domain which fit every peak and trough in your data.
A good place to start is to prevent the timescale parameters from being comparable to or smaller than the spacing between observations. That's what we did in the example above: notice the bounds
parameter that we gave to the minimize
function, which keeps the period well above the separation between points. When a best-fit hyperparameter lands right at its lower limit, that signals the limit was doing important work.
You should try whenever possible to choose a kernel that aligns with the source of your noise. For example, if your noise is smoothly varying, a squared-exponential kernel might do the trick. If your data is more rapidly varying, you may find a better fit with a Matern kernel. If you know that your data have a quasi-periodic trend (like rotating stars), fit with an exponential-cosine kernel. See Chapter 4 of Rasmussen and Williams for the full range of possibilities.
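As an example of one of those alternatives, here is a sketch of a Matern-3/2 kernel (see Chapter 4 of Rasmussen and Williams for the definition; this helper and its argument names are our own, written in the same shape as the kernels above):

```python
import numpy as np

def matern32(x1, x2, ell=1.0):
    """Matern-3/2 kernel: rougher functions than the squared-exponential."""
    d2 = np.sum(x1**2, axis=1)[:, None] + np.sum(x2**2, axis=1) - 2 * x1 @ x2.T
    r = np.sqrt(np.clip(d2, 0, None))  # clip guards against tiny negative values
    arg = np.sqrt(3) * r / ell
    return (1 + arg) * np.exp(-arg)

x = np.arange(5.0)[:, None]
K = matern32(x, x)
print(K.shape)  # (5, 5), symmetric, with ones on the diagonal
```

Because it shares the `(x1, x2, **kwargs)` signature, it could be passed to `gaussian_process_regression` in place of `sq_exp_kernel`.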
Now you may be asking, "why bother fitting a Gaussian process to the observations above, when we know the signal is sinusoidal?" If you know the functional form of the signal you're trying to fit, then you should use it! In Gaussian process terminology we'd call that "fitting for the mean model": the mean model is the function you subtract from your data to produce a time series of observations with zero mean.
Let's fit a sinusoidal model to the data:
def sine_model(p):
amplitude, period = p
return amplitude * np.sin(2*np.pi*t / period)
def chi2(p):
return np.sum((sine_model(p) - f)**2 / ferr**2)
# Initial guesses for [amplitude, period]:
init_parameters = [1.1, 3.2]
# Place uniform priors (boundaries) on the values of [amp, period]
# that are explored by the minimizer:
bounds = [[0.5, 2], [1, 10]]
# Use `minimize` to minimize the chi^2 using the L-BFGS-B method
solution = minimize(chi2, init_parameters, method='L-BFGS-B', bounds=bounds)
best_amp, best_period = solution.x
# Print the results:
print("Maximum-likelihood period: {0:.2f} days".format(best_period))
# Plot the result:
plt.errorbar(t.ravel(), f.ravel(), ferr, color='k', fmt='.')
plt.plot(t, sine_model(solution.x))
plt.xlabel('Time [days]')
plt.ylabel('Flux');
The minimizer is finding a best-fit solution that's close to the true solution, but the fit isn't so great. Let's plot the residuals (data - model
) with reasonable solutions for the amplitude and period of the sinusoidal mean model:
plt.errorbar(t.ravel(), (f - sine_model([1, true_period])).ravel(), ferr, color='k', fmt='.')
plt.xlabel('Time [days]')
plt.ylabel('Residuals');
Note that the residuals above do not look like white noise – they're correlated in time. This is exactly when to use a Gaussian process: when you want to fit a model but there is correlated noise in your data.
Now let's fit the mean model simultaneously with a Gaussian process using a squared-exponential kernel to soak up that correlated noise:
def sine_model_gp(p):
amplitude, period, ell = p
return amplitude * np.sin(2*np.pi*t / period)
def gp_model(p):
amplitude, period, ell = p
mean_model = sine_model_gp(p)
# Evaluate the GP at times `t` so we can compute residuals
mu, cov, lnlike = gaussian_process_regression(t, f - mean_model, ferr, t,
sq_exp_kernel, ell=ell)
residuals = f - mean_model - mu
return mu, cov, lnlike, residuals
def chi2(p):
mu, cov, lnlike, residuals = gp_model(p)
return -2 * lnlike
# Initial guesses for [amplitude, period, ell]:
init_parameters = [1.1, 3.5, 2]
# Place uniform priors (boundaries) on the values of [amp, period, ell]
# that are explored by the minimizer:
bounds = [[0.5, 2], [3, 10], [2, 10]]
# Use `minimize` to minimize the chi^2 using the L-BFGS-B method
solution = minimize(chi2, init_parameters, method='L-BFGS-B', bounds=bounds)
best_amp, best_period, best_ell = solution.x
# Print the results:
print("Maximum-likelihood period: {0:.2f} days".format(best_period))
print("Maximum-likelihood ell: {0:.2f} days".format(best_ell))
# Plot the result:
mu, cov, lnlike, residuals = gp_model(solution.x)
sigma = np.sqrt(np.diag(cov))
# First plot the data and the mean model
fig, ax = plt.subplots(3, 1, figsize=(6, 10), sharex=True)
ax[0].errorbar(t.ravel(), f.ravel(), ferr, color='k', fmt='.')
ax[0].plot(t, sine_model_gp(solution.x), label='mean model solution')
ax[0].plot(t, sine_model_gp(solution.x) + mu, color='C1', label='GP mean')
ax[0].fill_between(t.ravel(), sine_model_gp(solution.x).ravel() + mu.ravel() - sigma.ravel(),
sine_model_gp(solution.x).ravel() + mu.ravel() + sigma.ravel(),
color='C1', label='GP uncertainty', alpha=0.3)
ax[0].set(ylabel='Flux')
ax[0].legend()
# Next plot the data minus the mean model, and the best-fit GP
ax[1].errorbar(t.ravel(), np.ravel(f - sine_model_gp(solution.x)), ferr, color='k', fmt='.')
ax[1].plot(t.ravel(), mu.ravel(), color='C1', label='GP mean')
ax[1].fill_between(t.ravel(), mu.ravel() - sigma.ravel(),
mu.ravel() + sigma.ravel(),
color='C1', label='GP uncertainty', alpha=0.3)
ax[1].set(ylabel='Flux - mean model')
ax[1].legend()
# Lastly, plot the residuals after the GP and mean model are removed
ax[2].errorbar(t.ravel(), residuals.ravel(), ferr, color='k', fmt='.')
ax[2].set(xlabel='Time [days]', ylabel='Flux - mean model - GP')
fig.tight_layout()
plt.show()
In the example above we are fitting for the Gaussian process hyperparameter ell
at the same time as we fit for the sinusoidal model parameters: amplitude and period. The periodic component is being fit by the periodic mean function that we chose (a sine function), and the non-periodic, correlated noise present in our data was fit by the Gaussian process using a smooth squared-exponential kernel.
The uppermost plot shows the best-fit sinusoidal model (blue), which gets close to the correct period. It's clear, though, that the fit to the data isn't very good. The middle plot shows the observations with the best-fit mean model subtracted, which clearly shows the correlated noise we discovered earlier. The middle plot also shows the Gaussian process model (orange), which smoothly soaks up the correlated noise. The bottom plot shows the residuals after the mean model and the Gaussian process model have been subtracted off, leaving you with less correlated, nearly "white" noise.
A Gaussian process (GP) is a non-parametric approach to modeling noisy data.
GPs are useful...
- when you don't know the explicit functional form of the correlated noise in your data
- for directly measuring physical parameters in certain circumstances (e.g.: fits for a period in quasi-periodic observations)
- for modeling correlated noise that's present when you know the functional form of your signal (e.g.: fits with a mean model)
GPs will overfit your data if you're not careful to put reasonable priors on your kernel hyperparameters.
GPs are slow in general, so you should take advantage of packages like celerite and george, which use fast matrix inversion/decomposition tricks to speed things up.
This tutorial only covers GPs in one dimension, though they are extensible to N dimensions (see for example: GPyTorch).
For the ultimate reference on GPs, we encourage you to read Rasmussen and Williams (2006), which is available for free!