Bayesian optimization with context variables

In this notebook we are going to see how to use Emukit to solve optimization problems in which certain variables are fixed during the optimization phase. These are called context variables [1]. This is useful when some of the variables in the optimization are controllable/known factors. An example is optimizing the movement of a robot when the environmental conditions change, but the change is known.

In [1]:
from emukit.test_functions import branin_function
from emukit.core import ParameterSpace, ContinuousParameter, DiscreteParameter
from emukit.experimental_design.model_free.random_design import RandomDesign
from GPy.models import GPRegression
from emukit.model_wrappers import GPyModelWrapper
from emukit.bayesian_optimization.acquisitions import ExpectedImprovement
from emukit.bayesian_optimization.loops import BayesianOptimizationLoop
from emukit.core.loop import FixedIterationsStoppingCondition

Loading the problem and the loop

In [2]:
f, _ = branin_function()

Now we define the domain of the function to optimize.

In [3]:
parameter_space = ParameterSpace([ContinuousParameter('x1', -5, 10),
                                  ContinuousParameter('x2', 0, 15)])

We build the model:

In [4]:
design = RandomDesign(parameter_space)  # collect random initial points
X = design.get_samples(10)
Y = f(X)
model_gpy = GPRegression(X, Y)  # train a GPy model on the initial design
model_emukit = GPyModelWrapper(model_gpy)  # wrap the model for Emukit

And prepare the optimization object to run the loop.

In [5]:
expected_improvement = ExpectedImprovement(model = model_emukit)
bayesopt_loop = BayesianOptimizationLoop(model = model_emukit,
                                         space = parameter_space,
                                         acquisition = expected_improvement,
                                         batch_size = 1)

Now we set the number of iterations per run to 10.

In [6]:
max_iter = 10

Running the optimization by setting a context variable

To set a context, we simply create a dictionary with the variables to fix and pass it to the Bayesian optimization object when running the optimization. Note that every time we run new iterations we can choose different variables as the context. Here we run 3 sequences of 10 iterations each, with a different context in each.
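Conceptually, fixing a context pins those coordinates while candidate points vary only in the remaining free dimensions. A minimal plain-numpy sketch of that idea (illustrative only, not Emukit's internal implementation; the variable names and bounds match the Branin domain defined above):

```python
import numpy as np

rng = np.random.default_rng(0)
context = {'x1': 0.3}                     # variable fixed during this run
bounds = {'x1': (-5, 10), 'x2': (0, 15)}  # the Branin domain defined above
names = ['x1', 'x2']

# Sample candidate points: free variables uniformly, context variables pinned.
samples = np.column_stack([
    np.full(5, context[n]) if n in context
    else rng.uniform(*bounds[n], size=5)
    for n in names
])
print(samples[:, 0])  # every x1 entry equals 0.3
```

Emukit performs the analogous restriction internally when optimizing the acquisition function under a context.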

In [7]:
bayesopt_loop.run_loop(f, max_iter, context={'x1':0.3}) # we set x1 as the context variable
bayesopt_loop.run_loop(f, max_iter, context={'x2':0.1}) # we set x2 as the context variable
bayesopt_loop.run_loop(f, max_iter) # no context

We can now inspect the collected points.

In [8]:
bayesopt_loop.loop_state.X
Out[8]:
array([[ 9.13459515,  2.66235976],
       [ 3.96697599, 11.95100624],
       [-4.17885102,  4.85113642],
       [ 0.32507059,  0.08922947],
       [ 5.64977093, 13.01495774],
       [ 6.50174329, 11.99569935],
       [ 0.05185026,  8.54957567],
       [ 8.76057361,  0.56603508],
       [ 1.00369808, 13.77609726],
       [ 5.70878062, 11.08138747],
       [ 0.3       ,  5.11611766],
       [ 0.3       ,  5.88497906],
       [ 0.3       ,  5.9823043 ],
       [ 0.3       ,  6.17679176],
       [ 0.3       ,  6.29604643],
       [ 0.3       ,  6.19923853],
       [ 0.3       ,  4.54870967],
       [ 0.3       ,  6.13726925],
       [ 0.3       ,  6.17352127],
       [ 0.3       , 14.99158071],
       [10.        ,  0.1       ],
       [ 4.3965671 ,  0.1       ],
       [ 6.82134636,  0.1       ],
       [ 2.83010062,  0.1       ],
       [-5.        ,  0.1       ],
       [ 9.13361485,  0.1       ],
       [-2.32189126,  0.1       ],
       [ 8.41142546,  0.1       ],
       [ 8.42345634,  0.1       ],
       [-4.59246209,  0.1       ],
       [-5.        , 11.87026862],
       [10.        ,  7.80138872],
       [ 3.83576952,  3.12647385],
       [-5.        , 15.        ],
       [10.        , 15.        ],
       [ 3.2292838 ,  1.80201788],
       [ 7.74960871,  3.84223718],
       [-5.        ,  8.64581695],
       [10.        ,  4.60904445],
       [10.        ,  2.63352077]])
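Note that rows 11-20 have x1 pinned at 0.3 and rows 21-30 have x2 pinned at 0.1, confirming that each context was honored. To report the best point found, one can take the argmin over the collected objective values. A minimal sketch using small stand-in arrays in place of `bayesopt_loop.loop_state.X` and `bayesopt_loop.loop_state.Y` (the values below are illustrative, not from the run above):

```python
import numpy as np

# Stand-ins for bayesopt_loop.loop_state.X and .Y (hypothetical values):
X = np.array([[9.13, 2.66],
              [0.30, 5.12],
              [3.23, 1.80]])
Y = np.array([[40.5],
              [17.2],
              [1.9]])

best = int(np.argmin(Y))            # index of the lowest observed value
x_best, y_best = X[best], Y[best, 0]
print(x_best, y_best)               # location and value of the best observation
```

Depending on the installed Emukit version, `bayesopt_loop.get_results()` may also expose the minimum directly.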

References

  • [1] Krause, A. & Ong, C. S. Contextual Gaussian process bandit optimization. Advances in Neural Information Processing Systems (NIPS), 2011, 2447-2455.