Multimetric Optimization

In many applications, you may need to consider multiple competing metrics whose optimal values occur at different parameter settings. SigOpt enables this through Multimetric Experiments.

How does Multimetric Optimization work?

The contour plots below depict two competing metrics, where parameter values of `x1` and `x2` cannot simultaneously achieve the optima for both metrics `f1` and `f2`. For example, it can be extremely challenging to maximize model performance while also minimizing training time. SigOpt's Multimetric Optimization addresses this problem by finding sets of feasible or optimal values. The result of a Multimetric Experiment is a Pareto Frontier.

[Figure: contour plots of the competing metrics `f1` and `f2` over the parameters `x1` and `x2`]

The Pareto Frontier

Below is an example of possible outcomes of a Multimetric Experiment involving the metrics from the contour plots above.

[Figure: feasible region and Pareto Frontier (left) with the corresponding parameter choices (right)]

The graph on the left shows the resulting feasible region and Pareto Frontier. The blue circles are the Pareto-efficient points, the red circles are points that do not fall on the optimal frontier, and the black dots outline the feasible region. The blue circles form the optimal set of results, where one metric cannot be improved without the other suffering. Each blue point in the left graph corresponds to an efficient parameter choice, shown as a blue point in the right graph.
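The efficient set described above can be computed from any collection of observed metric pairs. The sketch below is plain Python, not part of the SigOpt API: it marks a point as Pareto-efficient when no other point is at least as good on both metrics and strictly better on at least one. It assumes both metrics are maximized; negate a minimized metric (such as training time) to fit this convention.

```python
def pareto_frontier(points):
    """Return the points not dominated by any other point.

    Assumes both metrics are maximized; point `a` dominates point `b`
    if `a` is at least as good on both metrics and differs from `b`.
    """
    def dominates(a, b):
        return a[0] >= b[0] and a[1] >= b[1] and a != b

    return [p for p in points if not any(dominates(q, p) for q in points)]

# Observed (f1, f2) values for a handful of hypothetical runs.
observations = [(0.9, 0.2), (0.7, 0.7), (0.2, 0.9), (0.5, 0.5), (0.1, 0.1)]
print(pareto_frontier(observations))  # → [(0.9, 0.2), (0.7, 0.7), (0.2, 0.9)]
```

Here (0.5, 0.5) and (0.1, 0.1) are dominated by (0.7, 0.7), so only the three frontier points remain.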

Interpreting the Solution

Defining a Multimetric Function in SigOpt

A SigOpt Multimetric Experiment can be conducted to explore the optimal values achievable for both metrics. We define these metrics using the Python client or the YAML format in the code blocks below, along with the associated experiment metadata used to define the SigOpt Experiment. Notice that, unlike a single-metric experiment, a Multimetric Experiment defines a list of metrics, each with the name of the metric and its optimization objective.

Multimetric Code Example

Python


```python
import sigopt

sigopt.create_experiment(
    name="Multimetric optimization",
    project="sigopt-examples",
    type="offline",
    parameters=[
        dict(
            name="hidden_layer_size",
            type="int",
            bounds=dict(min=32, max=128),
        ),
        dict(
            name="activation_fn",
            type="categorical",
            categorical_values=["relu", "tanh"],
        ),
    ],
    metrics=[
        dict(
            name="holdout_accuracy",
            strategy="optimize",
            objective="maximize",
        ),
        dict(
            name="inference_time",
            strategy="optimize",
            objective="minimize",
        ),
    ],
    parallel_bandwidth=1,
    budget=30,
)
```

YAML

```yaml
name: Multimetric optimization
project: sigopt-examples
type: offline
parameters:
  - name: hidden_layer_size
    type: int
    bounds:
      min: 32
      max: 128
  - name: activation_fn
    type: categorical
    categorical_values:
      - relu
      - tanh
metrics:
  - name: holdout_accuracy
    strategy: optimize
    objective: maximize
  - name: inference_time
    strategy: optimize
    objective: minimize
parallel_bandwidth: 1
budget: 30
```
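Once the experiment is created, each suggested parameter set is evaluated and both metric values are reported back by name. The sketch below assumes the SigOpt Python client's run loop (`experiment.loop()`, `run.params`, `run.log_metric`); the `evaluate` function is a hypothetical stand-in for real training and timing, with toy formulas in place of actual measurements.

```python
def evaluate(hidden_layer_size, activation_fn):
    """Hypothetical stand-in for training a model and timing inference."""
    # Toy formulas only: larger layers are "more accurate" but slower.
    accuracy = 0.8 + 0.001 * (hidden_layer_size - 32)
    inference_time = 0.002 * hidden_layer_size
    return accuracy, inference_time


def run_optimization(experiment):
    """Report BOTH optimized metrics, by name, for every run."""
    for run in experiment.loop():
        with run:
            accuracy, inference_time = evaluate(
                run.params.hidden_layer_size,
                run.params.activation_fn,
            )
            run.log_metric("holdout_accuracy", accuracy)
            run.log_metric("inference_time", inference_time)
```

A run that omits either metric would be incomplete, since SigOpt needs both values to place the observation relative to the frontier.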

SigOpt will find trade-offs in each of your metrics in order to find the efficient frontier. If you want to quantify how much of a trade-off you're willing to make, check out the Metric Thresholds feature.
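For illustration, a threshold can be attached to a metric definition. The fragment below assumes a `threshold` field on the metric dictionary, as described in the Metric Thresholds documentation; check that page for the exact semantics before relying on it.

```python
# Hypothetical fragment: steer the search away from runs whose
# holdout_accuracy falls below 0.85, while still minimizing
# inference_time (assumes the `threshold` metric field).
metrics = [
    dict(
        name="holdout_accuracy",
        strategy="optimize",
        objective="maximize",
        threshold=0.85,
    ),
    dict(
        name="inference_time",
        strategy="optimize",
        objective="minimize",
    ),
]
```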

Limitations

- `budget` must be set when a Multimetric Experiment is created
- The maximum number of optimized metrics is 2
