OPTaaS can optimize multiple objectives within a single Task. Your scoring function should return a dictionary containing a score for each objective. Optionally, it can also return a dictionary of variances for each objective, i.e. a tuple of two dictionaries: (scores, variances).
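For instance, a scoring function that reports both scores and variances might look like this (a minimal sketch; the objective values and variance estimates below are illustrative placeholders, not values prescribed by OPTaaS):

```python
# A sketch of a scoring function that also reports per-objective variances,
# e.g. estimated noise in each measurement. The numbers are placeholders.
def noisy_scoring_function(x1, x2):
    scores = {"f1": x1 + x2, "f2": x1 - x2}
    variances = {"f1": 0.01, "f2": 0.02}
    return scores, variances
```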
We will use this multi-objective optimization example:
import math
from mindfoundry.optaas.client.parameter import FloatParameter
from mindfoundry.optaas.client.objective import Objective
from mindfoundry.optaas.client.goal import Goal
parameters = [
    FloatParameter('x1', minimum=0, maximum=1),
    FloatParameter('x2', minimum=0, maximum=1)
]
objectives = [
    Objective("f1", goal=Goal.max),  # or goal=Goal.min as appropriate
    Objective("f2", goal=Goal.max)   # you can also specify known_min_score and known_max_score
]
def scoring_function(x1, x2):
    g = ((x1 - 0.5) ** 2) + ((x2 - 0.5) ** 2)
    x1_pi2 = x1 * math.pi / 2
    f1 = (1 + g) * math.cos(x1_pi2)
    f2 = (1 + g) * math.sin(x1_pi2)
    return {"f1": f1, "f2": f2}
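As a quick sanity check, at the centre of the domain (x1 = x2 = 0.5) the penalty term g is zero and x1 * pi / 2 equals pi / 4, so both objectives evaluate to cos(pi / 4) = sqrt(2) / 2, matching the scores reported for iteration 0 below:

```python
import math

# With x1 = x2 = 0.5: g = 0 and x1 * pi / 2 = pi / 4
f1 = (1 + 0.0) * math.cos(math.pi / 4)
f2 = (1 + 0.0) * math.sin(math.pi / 4)
# both equal sqrt(2) / 2, approximately 0.7071
```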
from mindfoundry.optaas.client.client import OPTaaSClient
client = OPTaaSClient('https://optaas.mindfoundry.ai', '<Your OPTaaS API key>')
task = client.create_task(
    title='Multi-objective Example',
    parameters=parameters,
    objectives=objectives,
    initial_configurations=4
)
At the end we will retrieve the set of Pareto front Results. These are the Results where, for each objective, the score cannot be improved without reducing the score for another objective.
pareto_set = task.run(scoring_function, max_iterations=30)
Running task "Multi-objective Example" for 30 iterations
Iteration: 0 Score: {'f1': 0.7071067811865476, 'f2': 0.7071067811865475} Configuration: {'x1': 0.5, 'x2': 0.5}
Iteration: 1 Score: {'f1': 1.0393644740751975, 'f2': 0.430518861410726} Configuration: {'x1': 0.25, 'x2': 0.75}
Iteration: 2 Score: {'f1': 0.43051886141072604, 'f2': 1.0393644740751975} Configuration: {'x1': 0.75, 'x2': 0.25}
Iteration: 3 Score: {'f1': 0.8574530376869998, 'f2': 0.5729318028014647} Configuration: {'x1': 0.375, 'x2': 0.375}
Iteration: 4 Score: {'f1': 4.118843994562698e-08, 'f2': 1.311068760938587} Configuration: {'x1': 0.99999998, 'x2': 0.7471209844157051}
Iteration: 5 Score: {'f1': 0.09592193816780585, 'f2': 1.4567384014126048} Configuration: {'x1': 0.9581408891314421, 'x2': 0.0}
Iteration: 6 Score: {'f1': 1.1752659197219903, 'f2': 0.5522576509385636} Configuration: {'x1': 0.2796537691999611, 'x2': 0.0}
Iteration: 7 Score: {'f1': 0.4164848656933371, 'f2': 1.2727990678700287} Configuration: {'x1': 0.7986765186547115, 'x2': 0.99999998}
Iteration: 8 Score: {'f1': 0.6372316613982189, 'f2': 1.1079497568369399} Configuration: {'x1': 0.6677206544264276, 'x2': 0.0}
Iteration: 9 Score: {'f1': 0.7853418846852135, 'f2': 0.9786915387002382} Configuration: {'x1': 0.5695002799034684, 'x2': 0.99999998}
Iteration: 10 Score: {'f1': 0.2996334146590194, 'f2': 1.3471648754563859} Configuration: {'x1': 0.860672395860025, 'x2': 0.99999998}
Iteration: 11 Score: {'f1': 0.27096842792228526, 'f2': 1.364096007108697} Configuration: {'x1': 0.8751648295134814, 'x2': 0.0}
Iteration: 12 Score: {'f1': 0.6765313066411998, 'f2': 1.0751481701020844} Configuration: {'x1': 0.6424451308839596, 'x2': 0.99999998}
Iteration: 13 Score: {'f1': 0.7795915234218058, 'f2': 0.9840004819662557} Configuration: {'x1': 0.5734595718837381, 'x2': 0.0}
Iteration: 14 Score: {'f1': 0.7788805899434732, 'f2': 0.9846551702225278} Configuration: {'x1': 0.5739482921665194, 'x2': 0.0}
Iteration: 15 Score: {'f1': 0.27668935894543406, 'f2': 1.3607574498971329} Configuration: {'x1': 0.8722940463621401, 'x2': 0.99999998}
Iteration: 16 Score: {'f1': 0.4113293194354817, 'f2': 1.2762642468118035} Configuration: {'x1': 0.8015144651650478, 'x2': 0.0}
Iteration: 17 Score: {'f1': 1.16208933052274, 'f2': 0.5693873859019918} Configuration: {'x1': 0.2900378594243644, 'x2': 0.99999998}
Iteration: 18 Score: {'f1': 0.49868273799395746, 'f2': 1.2152080249424346} Configuration: {'x1': 0.7520923609477937, 'x2': 0.99999998}
Iteration: 19 Score: {'f1': 0.823213807070615, 'f2': 0.9431211120917674} Configuration: {'x1': 0.5431506072927972, 'x2': 0.0}
Iteration: 20 Score: {'f1': 0.8270894739656314, 'f2': 0.9394209151689157} Configuration: {'x1': 0.5404278743492107, 'x2': 0.99999998}
Iteration: 21 Score: {'f1': 0.6831757365569862, 'f2': 1.0694969008566544} Configuration: {'x1': 0.6381142020364394, 'x2': 0.0}
Iteration: 22 Score: {'f1': 0.8370902299430105, 'f2': 0.9298208432046696} Configuration: {'x1': 0.533380305465506, 'x2': 0.0}
Iteration: 23 Score: {'f1': 0.7429468741187083, 'f2': 1.0172717043616415} Configuration: {'x1': 0.5984235477977614, 'x2': 0.99999998}
Iteration: 24 Score: {'f1': 0.6740670480706229, 'f2': 1.0772363132257081} Configuration: {'x1': 0.6440471663477743, 'x2': 0.99999998}
Iteration: 25 Score: {'f1': 0.5965258202153979, 'f2': 1.1408083469630266} Configuration: {'x1': 0.6932782772637422, 'x2': 0.0}
Iteration: 26 Score: {'f1': 0.30016699935835744, 'f2': 1.3468449043351842} Configuration: {'x1': 0.8604000451071535, 'x2': 0.0}
Iteration: 27 Score: {'f1': 0.16080975748499476, 'f2': 1.4245097056049982} Configuration: {'x1': 0.9284363664329726, 'x2': 0.99999998}
Iteration: 28 Score: {'f1': 0.6050539453579511, 'f2': 1.1340177648914858} Configuration: {'x1': 0.6879766387152254, 'x2': 0.99999998}
Iteration: 29 Score: {'f1': 0.6799645231375991, 'f2': 1.0722319646634353} Configuration: {'x1': 0.6402093797980355, 'x2': 0.0}
Task Completed
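Conceptually, the Pareto front can be recovered from any set of scored points with a simple dominance filter. Here is a plain-Python sketch (independent of the OPTaaS client), assuming both objectives are maximized:

```python
def pareto_front(points):
    """Return the points not dominated by any other point.

    A point a dominates b when a is at least as good as b on every
    objective and strictly better on at least one (both maximized here).
    """
    def dominates(a, b):
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))
    return [p for p in points if not any(dominates(q, p) for q in points)]

pts = [(1.0, 0.4), (0.7, 0.7), (0.4, 1.0), (0.5, 0.5)]
# (0.5, 0.5) is dominated by (0.7, 0.7); the other three are Pareto-optimal
```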
%matplotlib inline
from matplotlib import pyplot as plt
all_results = task.get_results()
all_f1_scores = [result.score['f1'] for result in all_results]
all_f2_scores = [result.score['f2'] for result in all_results]
ordered_pareto_set = sorted(pareto_set, key=lambda result: result.score['f1'])
pareto_f1_scores = [result.score['f1'] for result in ordered_pareto_set]
pareto_f2_scores = [result.score['f2'] for result in ordered_pareto_set]
plt.figure(figsize=(16,8))
plt.plot(all_f1_scores, all_f2_scores, 'xb', label='All Results')
plt.plot(pareto_f1_scores, pareto_f2_scores, 'r', label='Pareto Front')
plt.xlabel('f1')
plt.ylabel('f2')
plt.legend()
plt.show()
all_x1_values = [result.configuration.values['x1'] for result in all_results]
all_x2_values = [result.configuration.values['x2'] for result in all_results]
pareto_x1_values = [result.configuration.values['x1'] for result in pareto_set]
pareto_x2_values = [result.configuration.values['x2'] for result in pareto_set]
plt.figure(figsize=(16,8))
plt.plot(all_x1_values, all_x2_values, 'ob', label='All Results')
plt.plot(pareto_x1_values, pareto_x2_values, '.r', label='Pareto Set')
plt.xlabel('x1')
plt.ylabel('x2')
plt.legend()
plt.show()