Metric Constraints
The Metric Constraints feature combines the ideas of Metric Strategy and Metric Thresholds to give you more control over where SigOpt searches for desirable outcomes. For example, some stored metrics might be guardrail metrics, whose values you want to satisfy certain baseline requirements. In other cases, a Multimetric experiment can be rephrased as a constrained single-metric experiment, where you want to optimize a primary metric subject to a secondary metric exceeding a certain threshold.
Example: Optimizing Accuracy with Constraint on Inference Time
Recall the Neural Network example from the Metric Thresholds documentation. Instead of being interested in the tradeoff between accuracy and inference time, suppose we only want to optimize the accuracy of the network. However, we are still concerned with the inference time, so we define inference time as a Constraint Metric with a threshold of 100 ms. SigOpt allows you to state such limitations at experiment creation to help inform the optimizer of the practical restrictions in your problem.
Defining the Metric Constraints
To assign one of your metrics as a constraint metric, specify the strategy field as constraint in the desired Metric object when creating your experiment. You must specify a threshold for the Constraint Metric. When the objective is set to minimize, observations with constraint metric values lower than or equal to the threshold are considered feasible. When the objective is set to maximize, observations with constraint metric values greater than or equal to the threshold are considered feasible.
Below, we create a new experiment using the example of optimizing the accuracy of the network subject to an inference time constraint.
Core Module
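The following is a minimal Core Module sketch of such an experiment. The parameter names (hidden_size, learning_rate), their bounds, and the budget are illustrative assumptions; only the metrics list reflects the constraint setup described above.

```python
from sigopt import Connection

conn = Connection(client_token="SIGOPT_API_TOKEN")

experiment = conn.experiments().create(
    name="Classifier with inference time constraint",
    # Illustrative hyperparameters; substitute those of your own model.
    parameters=[
        dict(name="hidden_size", type="int", bounds=dict(min=32, max=512)),
        dict(name="learning_rate", type="double", bounds=dict(min=1e-4, max=1e-1)),
    ],
    metrics=[
        # Primary metric that SigOpt optimizes.
        dict(name="accuracy", objective="maximize", strategy="optimize"),
        # Constraint metric: observations with inference_time_ms <= 100 are feasible.
        dict(
            name="inference_time_ms",
            objective="minimize",
            strategy="constraint",
            threshold=100,
        ),
    ],
    observation_budget=60,
)
```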
AI Module
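A comparable sketch for the AI Module, using the same illustrative parameter names and budget, might look like the following:

```python
import sigopt

experiment = sigopt.create_experiment(
    name="Classifier with inference time constraint",
    # Illustrative hyperparameters; substitute those of your own model.
    parameters=[
        dict(name="hidden_size", type="int", bounds=dict(min=32, max=512)),
        dict(name="learning_rate", type="double", bounds=dict(min=1e-4, max=1e-1)),
    ],
    metrics=[
        dict(name="accuracy", objective="maximize", strategy="optimize"),
        # Constraint metric with its required threshold.
        dict(name="inference_time_ms", objective="minimize", strategy="constraint", threshold=100),
    ],
    budget=60,
)
```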
Updating Constraints
Metric Constraints allow you to update the threshold on the Properties page of an experiment at any time while the experiment is in progress.
Core Module
For Core Module users, the thresholds can also be updated directly via the API. An example is given below.
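A minimal sketch, assuming the conn and experiment objects from the Core Module example above; the new threshold value of 80 ms is illustrative.

```python
# Tighten the inference time constraint while the experiment is in progress.
conn.experiments(experiment.id).update(
    metrics=[
        dict(name="inference_time_ms", threshold=80),
    ],
)
```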
Feasibility
If the threshold defined for a Constraint Metric cannot be satisfied anywhere in the domain, this feature will perform unpredictably. For example, if you set the threshold for inference time to 0 ms (which is not actually achievable), this feature will assume 0 ms is possible and explore the domain erratically trying to find solutions that satisfy the threshold. When defining constraints, it is best to state threshold values at the start of an experiment that are well understood based on previous experiments or prior knowledge.
Best Observations
Best Assignment List (Best Runs for the AI Module) will only consider results that satisfy all constraint metric thresholds. In the early stages of an experiment it is possible that no observation satisfies every threshold, and thus there are no best observations. In this situation, Best Assignment List will return an empty Pagination, and we recommend waiting until the experiment is further along before retrieving best assignments.
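As a sketch with the Core Module (assuming the conn and experiment objects from the earlier examples), the situation described above can be handled like this:

```python
# Only observations that satisfy all constraint metric thresholds are returned.
best_assignments = conn.experiments(experiment.id).best_assignments().fetch()

if not best_assignments.data:
    # Empty Pagination: no observation satisfies every constraint yet.
    print("No feasible best observations yet; let the experiment run longer.")
else:
    for best in best_assignments.data:
        print(best.assignments, [(value.name, value.value) for value in best.values])
```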
Recommendations
Set the experiment budget higher. SigOpt Experiments with Metric Constraints require additional data points to understand the associated feasible parameter space. We recommend adding 25% to your original budget for each additional constraint metric.
Parallelism is another powerful tool for accelerating tuning by testing multiple observations simultaneously. It can be used in conjunction with Metric Constraints, but each constraint metric requires extra time for SigOpt to compute the associated feasible parameter space. We recommend limiting parallelism to no more than 5 simultaneous observations.
Limitations
There can be up to 4 constraint metrics for an experiment.
Constraint metrics must define threshold values.
All specified constraint metrics must be reported for successful observations.
No constraint metrics can be reported for Failed observations.
A budget must be set for an experiment with Metric Constraints.