Multimetric Optimization

In many applications, you may need to consider multiple competing metrics whose optimal values occur at different parameter settings. SigOpt enables this through Multimetric Experiments.

How does Multimetric Optimization work?

The contour plots below depict two competing metrics `f1` and `f2`: no single choice of the parameters `x1` and `x2` can simultaneously achieve the optima of both metrics. For example, it can be extremely challenging to maximize model performance while also minimizing training time. SigOpt's Multimetric Optimization addresses this problem by finding sets of feasible or optimal values. The result of a Multimetric Experiment is a Pareto Frontier.

(Image: contour plots of the competing metrics `f1` and `f2` over the parameters `x1` and `x2`.)

The Pareto Frontier

Below is an example of possible outcomes of a Multimetric Experiment involving the metrics from the contour plots above.

(Image: the feasible region and Pareto Frontier in metric space on the left; the corresponding parameter choices on the right.)

The graph on the left shows the resulting feasible region and Pareto Frontier. In this figure, the blue circles are the most efficient points, the red circles represent points that do not fall on the optimal frontier, and the black dots outline the feasible region. The blue circles form the optimal set of results, where one metric cannot be improved without another metric suffering. Each blue point in the left graph corresponds to an efficient parameter choice, represented by a blue point in the right graph.
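To make "one metric cannot be improved without another suffering" concrete, here is a small standalone sketch (not part of the SigOpt API) that filters a list of observed `(f1, f2)` points down to the non-dominated set, assuming both metrics are being maximized:

```python
def pareto_frontier(points):
    """Return the non-dominated subset of (f1, f2) observations.

    A point is dominated if some other point is at least as good in
    both metrics and strictly better in at least one (here, any
    distinct point that is >= in both coordinates).
    Assumes both metrics are maximized.
    """
    frontier = []
    for candidate in points:
        dominated = any(
            other != candidate
            and other[0] >= candidate[0]
            and other[1] >= candidate[1]
            for other in points
        )
        if not dominated:
            frontier.append(candidate)
    return frontier
```

For example, `pareto_frontier([(1, 5), (2, 4), (3, 3), (2, 2), (0, 6)])` drops only `(2, 2)`, which is dominated by `(2, 4)`; every remaining point trades one metric off against the other.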

Interpreting the Solution

Defining a Multimetric Function in SigOpt

A SigOpt Multimetric Experiment can be conducted to explore the optimal values achievable for both metrics. We define these metrics using the Python client or the equivalent YAML configuration in the code block below, along with the associated experiment metadata used to define the SigOpt Experiment. Notice that, unlike a single-metric experiment, a Multimetric Experiment reports a list of values on each observation, each pairing a metric name with its measured value.

Multimetric Code Example

Python

```python
import sigopt

sigopt.create_experiment(
    name="Multimetric optimization",
    project="sigopt-examples",
    type="offline",
    parameters=[
        dict(
            name="hidden_layer_size",
            type="int",
            bounds=dict(min=32, max=128),
        ),
        dict(
            name="activation_fn",
            type="categorical",
            categorical_values=["relu", "tanh"],
        ),
    ],
    metrics=[
        dict(
            name="holdout_accuracy",
            strategy="optimize",
            objective="maximize",
        ),
        dict(
            name="inference_time",
            strategy="optimize",
            objective="minimize",
        ),
    ],
    parallel_bandwidth=1,
    budget=30,
)
```

YAML

```yaml
name: Multimetric optimization
project: sigopt-examples
type: offline
parameters:
- name: hidden_layer_size
  type: int
  bounds:
    min: 32
    max: 128
- name: activation_fn
  type: categorical
  categorical_values:
  - relu
  - tanh
metrics:
- name: holdout_accuracy
  strategy: optimize
  objective: maximize
- name: inference_time
  strategy: optimize
  objective: minimize
parallel_bandwidth: 1
budget: 30
```
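Once the experiment is created, each suggested parameter assignment must be evaluated and both metric values reported by name. The sketch below assumes the sigopt Python client's run loop (`experiment.loop()`, `run.params`, `run.log_metric`); `evaluate_model` is a hypothetical stand-in for real training and evaluation:

```python
import time


def evaluate_model(assignments):
    """Hypothetical stand-in for real training/evaluation.

    Returns both metric values for one parameter assignment.
    Here we simply pretend larger hidden layers are more accurate
    but slower at inference.
    """
    start = time.perf_counter()
    size = assignments["hidden_layer_size"]
    accuracy = min(0.99, 0.80 + size / 1000.0)
    inference_time = (time.perf_counter() - start) + size * 1e-4
    return accuracy, inference_time


def run_multimetric_experiment(experiment):
    """Drive the loop for the experiment object returned by
    sigopt.create_experiment(...) above."""
    for run in experiment.loop():
        with run:
            accuracy, inference_time = evaluate_model(run.params)
            # Report one value per optimized metric, matched by name
            # to the metric definitions in the experiment config.
            run.log_metric("holdout_accuracy", accuracy)
            run.log_metric("inference_time", inference_time)
```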

SigOpt explores trade-offs between your metrics in order to find the efficient frontier. If you want to quantify how much of a trade-off you're willing to make, check out the Metric Thresholds feature.
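As a hedged sketch of what that might look like, the metric definitions above could carry a `threshold` field to bound the acceptable trade-off; check the Metric Thresholds documentation for the exact syntax and semantics before relying on this:

```python
# Hypothetical sketch: constraining the trade-off with metric thresholds.
# The threshold values below are illustrative, not recommendations.
metrics = [
    dict(
        name="holdout_accuracy",
        strategy="optimize",
        objective="maximize",
        threshold=0.9,  # prefer solutions with accuracy >= 0.9
    ),
    dict(
        name="inference_time",
        strategy="optimize",
        objective="minimize",
        threshold=0.5,  # prefer solutions with inference_time <= 0.5
    ),
]
```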

Limitations

- `budget` must be set when a Multimetric Experiment is created.
- The maximum number of optimized metrics is 2.
