In Multimetric Experiments, SigOpt searches for the Pareto Frontier: the parameter assignments for which neither metric can be improved without sacrificing the other. The final outcome of such an experiment is the subset of all observations with efficient metric values. However, it may be the case that only a subset of the Pareto Frontier is actually relevant for a given application. Some points on the Pareto Frontier may not be viable because unique business requirements force one or both metrics to meet custom criteria.
The goal of Metric Thresholds is to allow you to provide a region of metric values which will define success for you. SigOpt’s Optimizer will use this information to focus on this region with the intent of better resolving the Pareto Frontier to satisfy these stated thresholds. If you want to place thresholds on non-optimized (guardrail) metrics, see Metric Constraints.
Example: Balancing Accuracy with Speed
Neural networks have great capacity to perform well in complicated machine learning situations; however, this may come at the cost of creating a very complex model with a slow inference time. We discussed this situation in an earlier blog post, where we used the Metric Thresholds feature to state these practical limitations at experiment creation and direct the SigOpt search toward the regions of interest.
Defining the Metric Thresholds
To assign a threshold to one or more of your metrics, specify the threshold field in the desired Metric object when creating your experiment. You can specify a threshold for any number of metrics in your multimetric experiment. To assign thresholds after an experiment has been created, update the metrics field, as shown below. SigOpt will interpret the specified threshold in the following ways.
By default, SigOpt will assume you want to maximize each metric and try to find observations that are greater than or equal to the specified threshold.
If you provide an objective of “minimize” for a metric, SigOpt will try to find observations that are less than or equal to the specified threshold.
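The two interpretation rules above can be sketched as a metrics payload. This is a minimal illustration assuming the SigOpt Python client; the metric names, parameter, budget, and threshold values are hypothetical examples, not values from this guide.

```python
# Metrics payload for a multimetric experiment with thresholds.
# Metric names and threshold values below are illustrative.
metrics = [
    # Default objective is "maximize": SigOpt seeks accuracy >= 0.90
    {"name": "accuracy", "objective": "maximize", "threshold": 0.90},
    # Objective "minimize": SigOpt seeks inference_time <= 0.05 (seconds)
    {"name": "inference_time", "objective": "minimize", "threshold": 0.05},
]

# With the SigOpt Python client, this payload would be passed at
# experiment creation, roughly like so (requires an API token):
# from sigopt import Connection
# conn = Connection(client_token="YOUR_API_TOKEN")
# experiment = conn.experiments().create(
#     name="Accuracy vs. inference time",
#     parameters=[{"name": "hidden_units", "type": "int",
#                  "bounds": {"min": 16, "max": 512}}],
#     metrics=metrics,
#     observation_budget=60,
# )
```

Each Metric object carries its own `threshold`, so you can constrain one metric, both, or neither.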
If the defined metric thresholds are not feasible anywhere in the domain, this feature will perform unpredictably. SigOpt does not know beforehand what is feasible, so setting unrealistic goals hinders our ability to exploit the information we gain to optimize toward the threshold you’ve specified. For example, if the accuracy threshold for a machine learning model is set to 984 when the actual accuracy lies between 0 and 1, this feature will assume the value 984 is achievable and explore the domain erratically trying to find solutions that satisfy this threshold.
When defining thresholds, it is best
to state well-established thresholds at the start of an experiment if you have the data, or
to update the experiment after some feasible observations have been reported.
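The second option, adding thresholds to a running experiment, amounts to updating the experiment's metrics field. This is a minimal sketch assuming the SigOpt Python client; the experiment id and threshold values are hypothetical.

```python
# Updated metrics payload: same metric names as at creation, with
# thresholds chosen after feasible observations have been reported.
# Values below are illustrative.
updated_metrics = [
    {"name": "accuracy", "threshold": 0.88},
    {"name": "inference_time", "threshold": 0.10},
]

# With the SigOpt Python client, the update would look roughly like:
# from sigopt import Connection
# conn = Connection(client_token="YOUR_API_TOKEN")
# conn.experiments("EXPERIMENT_ID").update(metrics=updated_metrics)
```

Basing the thresholds on metric values you have already observed keeps them inside the feasible region, avoiding the erratic behavior described above.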