This notebook demonstrates basic usage of the natural gradient optimizer, both on its own and in combination with the Adam optimizer.
import warnings
import numpy as np
import gpflow
from gpflow.test_util import notebook_niter, notebook_range
from gpflow.models import VGP, GPR, SGPR, SVGP
from gpflow.training import NatGradOptimizer, AdamOptimizer, XiSqrtMeanVar
%matplotlib inline
%precision 4
warnings.filterwarnings('ignore')
np.random.seed(0)
N, D = 100, 2
# inducing points
M = 10
X = np.random.uniform(size=(N, D))
Y = np.sin(10 * X)
Z = np.random.uniform(size=(M, D))
adam_learning_rate = 0.01
iterations = notebook_niter(5)
def make_matern_kernel():
    return gpflow.kernels.Matern52(D)
Below we will demonstrate how natural gradients can turn VGP into GPR in a single step, if the likelihood is Gaussian.
Let's start by first creating a standard GPR model with Gaussian likelihood:
gpr = GPR(X, Y, kern=make_matern_kernel())
The log likelihood of the exact GP model is:
gpr.compute_log_likelihood()
-231.0899
Now we will create an approximate model which approximates the true posterior via a variational Gaussian distribution.
We initialize the distribution to have zero mean and unit variance.
vgp = VGP(X, Y, kern=make_matern_kernel(), likelihood=gpflow.likelihoods.Gaussian())
The log likelihood of the approximate GP model is:
vgp.compute_log_likelihood()
-328.8438
Clearly, our initial guess for the variational distribution is not optimal, so the bound it yields lies well below the log likelihood of the exact GPR model. We can optimize the variational parameters to tighten this bound.
In fact, we only need to take one step in the natural gradient direction to recover the exact posterior:
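As a rough sketch of why a single step suffices (the notation here is illustrative and not used elsewhere in this notebook): writing the ELBO $\mathcal{L}$ as a function of the natural parameters $\theta$ of the Gaussian variational distribution, the natural gradient update is

$$\theta \leftarrow \theta + \gamma \, \mathbf{F}(\theta)^{-1} \nabla_\theta \mathcal{L}(\theta),$$

where $\mathbf{F}$ is the Fisher information matrix of the variational distribution. When the likelihood is Gaussian the model is conjugate, and a known property of this update is that a single step with $\gamma = 1$ lands exactly on the optimal variational distribution, i.e. the exact GPR posterior.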
natgrad_optimizer = NatGradOptimizer(gamma=1.)
natgrad_tensor = natgrad_optimizer.make_optimize_tensor(vgp, var_list=[(vgp.q_mu, vgp.q_sqrt)])
session = gpflow.get_default_session()
session.run(natgrad_tensor)
# update the cache of the variational parameters in the current session
vgp.anchor(session)
The log likelihood of the approximate GP model after a single natural gradient step:
vgp.compute_log_likelihood()
-231.0906
In the Gaussian likelihood case we can iterate between an Adam update for the hyperparameters and a natural gradient update for the variational parameters. This way, we optimize the hyperparameters as if the model were a GPR.
The trick is to stop Adam from updating the variational parameters by marking them as not trainable.
# Stop Adam from optimizing the variational parameters
vgp.q_mu.trainable = False
vgp.q_sqrt.trainable = False
# Create Adam tensors for each model
adam_for_vgp_tensor = AdamOptimizer(learning_rate=adam_learning_rate).make_optimize_tensor(vgp)
adam_for_gpr_tensor = AdamOptimizer(learning_rate=adam_learning_rate).make_optimize_tensor(gpr)
variational_params = [(vgp.q_mu, vgp.q_sqrt)]
natgrad_tensor = NatGradOptimizer(gamma=1.).make_optimize_tensor(vgp, var_list=variational_params)
for i in range(iterations):
    session.run(adam_for_gpr_tensor)
    iteration = i + 1
    likelihood = session.run(gpr.likelihood_tensor)
    print(f'GPR with Adam: iteration {iteration} likelihood {likelihood:.04f}')
# Update the cache of the parameters in the current session
gpr.anchor(session)
GPR with Adam: iteration 1 likelihood -230.6706
GPR with Adam: iteration 2 likelihood -230.2508
GPR with Adam: iteration 3 likelihood -229.8303
GPR with Adam: iteration 4 likelihood -229.4093
GPR with Adam: iteration 5 likelihood -228.9876
for i in range(iterations):
    session.run(adam_for_vgp_tensor)
    session.run(natgrad_tensor)
    iteration = i + 1
    likelihood = session.run(vgp.likelihood_tensor)
    print(f'VGP with natural gradients and Adam: iteration {iteration} likelihood {likelihood:.04f}')
# We need to alter their trainable status in order to correctly anchor them in the current session
vgp.q_mu.trainable = True
vgp.q_sqrt.trainable = True
# Update the cache of the parameters (including the variational) in the current session
vgp.anchor(session)
VGP with natural gradients and Adam: iteration 1 likelihood -230.6713
VGP with natural gradients and Adam: iteration 2 likelihood -230.2514
VGP with natural gradients and Adam: iteration 3 likelihood -229.8310
VGP with natural gradients and Adam: iteration 4 likelihood -229.4099
VGP with natural gradients and Adam: iteration 5 likelihood -228.9882
Compare the GPR and VGP lengthscales after optimization:
print(f'GPR lengthscales = {gpr.kern.lengthscales.value:.04f}')
print(f'VGP lengthscales = {vgp.kern.lengthscales.value:.04f}')
GPR lengthscales = 0.9686
VGP lengthscales = 0.9686
Similarly, natural gradients turn SVGP into SGPR in the Gaussian likelihood case.
Again, we can combine natural gradients with Adam to update both the variational parameters and the hyperparameters.
Here we'll just demonstrate a single natural gradient step.
svgp = SVGP(X, Y, kern=make_matern_kernel(), likelihood=gpflow.likelihoods.Gaussian(), Z=Z)
sgpr = SGPR(X, Y, kern=make_matern_kernel(), Z=Z)
for model in svgp, sgpr:
    model.likelihood.variance = 0.1
Analytically optimal sparse model likelihood:
sgpr.compute_log_likelihood()
-281.6273
SVGP likelihood before natural gradient step:
svgp.compute_log_likelihood()
-1404.0805
natgrad_tensor = NatGradOptimizer(gamma=1.).make_optimize_tensor(svgp, var_list=[(svgp.q_mu, svgp.q_sqrt)])
session = gpflow.get_default_session()
session.run(natgrad_tensor)
# Update the cache of the variational parameters in the current session
svgp.anchor(session)
SVGP likelihood after a single natural gradient step:
svgp.compute_log_likelihood()
-281.6273
A crucial property of the natural gradient method is that it still works with minibatches. In practice, though, we need to use a smaller gamma value.
svgp = SVGP(X, Y, kern=make_matern_kernel(),
            likelihood=gpflow.likelihoods.Gaussian(), Z=Z, minibatch_size=50)
svgp.likelihood.variance = 0.1
variational_params = [(svgp.q_mu, svgp.q_sqrt)]
natgrad = NatGradOptimizer(gamma=.1)
natgrad_tensor = natgrad.make_optimize_tensor(svgp, var_list=variational_params)
for _ in range(notebook_niter(100)):
    session.run(natgrad_tensor)
svgp.anchor(session)
Minibatch SVGP likelihood after NatGrad optimization (the bound is stochastic under minibatching, so we average it over many evaluations):
np.average([svgp.compute_log_likelihood() for _ in notebook_range(1000)])
-282.2219
Compared to an SVGP trained with ordinary gradients on minibatches, the natural gradient optimizer is much faster in the Gaussian case.
Here we'll learn the hyperparameters together with the variational parameters, comparing the interleaved NatGrad + Adam approach with plain Adam applied jointly to the hyperparameters and the variational parameters.
Note that again we have to compromise with a smaller gamma value, which we keep fixed throughout the optimization.
svgp_ordinary = SVGP(X, Y,
                     kern=make_matern_kernel(),
                     likelihood=gpflow.likelihoods.Gaussian(),
                     Z=Z,
                     minibatch_size=50)
svgp_natgrad = SVGP(X, Y,
                    kern=make_matern_kernel(),
                    likelihood=gpflow.likelihoods.Gaussian(),
                    Z=Z,
                    minibatch_size=50)
# ordinary gradients with Adam for SVGP
adam = AdamOptimizer(adam_learning_rate)
adam_for_svgp_ordinary_tensor = adam.make_optimize_tensor(svgp_ordinary)
# NatGrads and Adam for SVGP
# Stop Adam from optimizing the variational parameters
svgp_natgrad.q_mu.trainable = False
svgp_natgrad.q_sqrt.trainable = False
# Create the optimize_tensors for SVGP
adam = AdamOptimizer(adam_learning_rate)
adam_for_svgp_natgrad_tensor = adam.make_optimize_tensor(svgp_natgrad)
natgrad = NatGradOptimizer(gamma=.1)
variational_params = [(svgp_natgrad.q_mu, svgp_natgrad.q_sqrt)]
natgrad_tensor = natgrad.make_optimize_tensor(svgp_natgrad, var_list=variational_params)
Let's optimize the models now:
# Optimize svgp_ordinary
for _ in range(notebook_niter(100)):
    session.run(adam_for_svgp_ordinary_tensor)
svgp_ordinary.anchor(session)
# Optimize svgp_natgrad
for _ in range(notebook_niter(100)):
    session.run(adam_for_svgp_natgrad_tensor)
    session.run(natgrad_tensor)
svgp_natgrad.anchor(session)
SVGP likelihood after ordinary Adam optimization:
np.average([svgp_ordinary.compute_log_likelihood() for _ in notebook_range(1000)])
-207.4970
SVGP likelihood after NatGrad and Adam optimization:
np.average([svgp_natgrad.compute_log_likelihood() for _ in notebook_range(1000)])
-197.0681
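To see the difference in convergence speed rather than just the final values, one option is to keep training both models and record their bounds along the way. The following is a minimal sketch (not part of the original comparison); it simply continues from the state reached above and logs the stochastic bound of each model every few iterations:
# Hypothetical monitoring loop: continue training both SVGP models and record their
# (stochastic, minibatch-based) bounds every few iterations to compare convergence speed.
log_every = 10
ordinary_trace, natgrad_trace = [], []
for i in range(notebook_niter(100)):
    session.run(adam_for_svgp_ordinary_tensor)
    session.run(adam_for_svgp_natgrad_tensor)
    session.run(natgrad_tensor)
    if i % log_every == 0:
        ordinary_trace.append(session.run(svgp_ordinary.likelihood_tensor))
        natgrad_trace.append(session.run(svgp_natgrad.likelihood_tensor))
svgp_ordinary.anchor(session)
svgp_natgrad.anchor(session)
print('Adam only:      ', np.round(ordinary_trace, 2))
print('NatGrad + Adam: ', np.round(natgrad_trace, 2))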
Natural gradients also apply to models with non-conjugate likelihoods, for example binary classification with a Bernoulli likelihood. In this case there is no analytically optimal posterior to recover in a single step, but natural gradients can still speed up optimization of the variational parameters.
Y_binary = np.random.choice([1., -1.], size=X.shape)
vgp_bernoulli = VGP(X, Y_binary, kern=make_matern_kernel(), likelihood=gpflow.likelihoods.Bernoulli())
vgp_bernoulli_natgrad = VGP(X, Y_binary, kern=make_matern_kernel(), likelihood=gpflow.likelihoods.Bernoulli())
# ordinary gradients with Adam for VGP with Bernoulli likelihood
adam = AdamOptimizer(adam_learning_rate)
adam_for_vgp_bernoulli_tensor = adam.make_optimize_tensor(vgp_bernoulli)
# NatGrads and Adam for VGP with Bernoulli likelihood
# Stop Adam from optimizing the variational parameters
vgp_bernoulli_natgrad.q_mu.trainable = False
vgp_bernoulli_natgrad.q_sqrt.trainable = False
# Create the optimize_tensors for VGP with natural gradients
adam = AdamOptimizer(adam_learning_rate)
adam_for_vgp_bernoulli_natgrad_tensor = adam.make_optimize_tensor(vgp_bernoulli_natgrad)
natgrad = NatGradOptimizer(gamma=.1)
variational_params = [(vgp_bernoulli_natgrad.q_mu, vgp_bernoulli_natgrad.q_sqrt)]
natgrad_tensor = natgrad.make_optimize_tensor(vgp_bernoulli_natgrad, var_list=variational_params)
# Optimize vgp_bernoulli
for _ in range(notebook_niter(100)):
    session.run(adam_for_vgp_bernoulli_tensor)
vgp_bernoulli.anchor(session)
# Optimize vgp_bernoulli_natgrad
for _ in range(notebook_niter(100)):
    session.run(adam_for_vgp_bernoulli_natgrad_tensor)
    session.run(natgrad_tensor)
vgp_bernoulli_natgrad.anchor(session)
VGP likelihood after ordinary Adam optimization:
vgp_bernoulli.compute_log_likelihood()
-146.1206
VGP likelihood after NatGrad + Adam optimization:
vgp_bernoulli_natgrad.compute_log_likelihood()
-143.9411
We can also choose to run natural gradients in another parameterization.
A sensible choice is the parameterization the model itself uses, the mean and the square root of the covariance (q_mu, q_sqrt), which is already available in GPflow as the XiSqrtMeanVar transform.
vgp_bernoulli_natgrads_xi = VGP(X, Y_binary,
                                kern=make_matern_kernel(),
                                likelihood=gpflow.likelihoods.Bernoulli())
var_list = [(vgp_bernoulli_natgrads_xi.q_mu, vgp_bernoulli_natgrads_xi.q_sqrt, XiSqrtMeanVar())]
# Stop Adam from optimizing the variational parameters
vgp_bernoulli_natgrads_xi.q_mu.trainable = False
vgp_bernoulli_natgrads_xi.q_sqrt.trainable = False
# Create the optimize_tensors for VGP with Bernoulli likelihood
adam = AdamOptimizer(adam_learning_rate)
adam_for_vgp_bernoulli_natgrads_xi_tensor = adam.make_optimize_tensor(vgp_bernoulli_natgrads_xi)
natgrad = NatGradOptimizer(gamma=.01)
natgrad_tensor = natgrad.make_optimize_tensor(vgp_bernoulli_natgrads_xi, var_list=var_list)
# Optimize vgp_bernoulli_natgrads_xi
for _ in range(notebook_niter(100)):
    session.run(adam_for_vgp_bernoulli_natgrads_xi_tensor)
    session.run(natgrad_tensor)
vgp_bernoulli_natgrads_xi.anchor(session)
VGP likelihood after NatGrads with XiSqrtMeanVar + Adam optimization:
vgp_bernoulli_natgrads_xi.compute_log_likelihood()
-143.9014
With sufficiently small steps it shouldn't make a difference which transform is used, but for larger steps the choice of transform can matter in practice.
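As a minimal sketch of how one might check this (this cell is illustrative and not from the original notebook; the step size is arbitrary, and very large steps can make the optimization unstable): train two fresh Bernoulli VGP models with the same, larger gamma, one passing (q_mu, q_sqrt) as before and one adding the XiSqrtMeanVar() transform, and compare the resulting bounds.
# Illustrative comparison (assumed setup: same data, session and imports as above).
# Two fresh models, same larger step size, different parameterizations for the natural gradient.
gamma_large = 0.2  # arbitrary, larger than the 0.01 used with XiSqrtMeanVar above

vgp_plain_xi = VGP(X, Y_binary, kern=make_matern_kernel(), likelihood=gpflow.likelihoods.Bernoulli())
vgp_meanvar_xi = VGP(X, Y_binary, kern=make_matern_kernel(), likelihood=gpflow.likelihoods.Bernoulli())

natgrad_plain_tensor = NatGradOptimizer(gamma=gamma_large).make_optimize_tensor(
    vgp_plain_xi, var_list=[(vgp_plain_xi.q_mu, vgp_plain_xi.q_sqrt)])
natgrad_meanvar_tensor = NatGradOptimizer(gamma=gamma_large).make_optimize_tensor(
    vgp_meanvar_xi, var_list=[(vgp_meanvar_xi.q_mu, vgp_meanvar_xi.q_sqrt, XiSqrtMeanVar())])

# Run the same number of natural gradient steps for each model in the same session
for _ in range(notebook_niter(20)):
    session.run(natgrad_plain_tensor)
    session.run(natgrad_meanvar_tensor)

vgp_plain_xi.anchor(session)
vgp_meanvar_xi.anchor(session)

print(f'(q_mu, q_sqrt) transform:  {vgp_plain_xi.compute_log_likelihood():.04f}')
print(f'XiSqrtMeanVar transform:   {vgp_meanvar_xi.compute_log_likelihood():.04f}')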