Metric Failure
If SigOpt makes a Suggestion that is not feasible, you can report a failed Observation, which tells us that this Suggestion led to a metric failure. As you report more of these failed Observations, our internal optimization algorithms will learn the feasible region and only recommend points there.
Here are some examples of when to report a metric failure:
- A neural network architecture fails to converge.
- A chemical mixture is known to lead to undesirable results.
- The Assignments are simply not in the domain of the function you're trying to optimize.
If an infeasible region of the parameter space is known beforehand, it may be possible to exclude it up front with Parameter Constraints. In situations in which feasibility is defined through thresholding on auxiliary, non-optimized metric values, it may be more beneficial to use Metric Constraints. Both alternatives are sketched below.
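The sketch below shows both features with the SigOpt Python client; the parameter names, the mixture constraint, and the `toxicity` threshold are illustrative placeholders, not values from this page.

```python
from sigopt import Connection

conn = Connection(client_token="YOUR_API_TOKEN")

experiment = conn.experiments().create(
    name="Mixture with known infeasible region",
    parameters=[
        dict(name="component_a", type="double", bounds=dict(min=0, max=1)),
        dict(name="component_b", type="double", bounds=dict(min=0, max=1)),
    ],
    # Parameter Constraints: rule out a known-infeasible region up front,
    # here by requiring component_a + component_b <= 1.
    linear_constraints=[
        dict(
            type="less_than",
            threshold=1,
            terms=[
                dict(name="component_a", weight=1),
                dict(name="component_b", weight=1),
            ],
        ),
    ],
    # Metric Constraints: express feasibility as a threshold on an
    # auxiliary, non-optimized metric instead of reporting failures.
    metrics=[
        dict(name="yield", objective="maximize"),
        dict(
            name="toxicity",
            strategy="constraint",
            objective="minimize",
            threshold=0.2,
        ),
    ],
    observation_budget=60,
)
```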
Note that a failed Observation should be reported only if obtaining an evaluation metric was impossible because of the Assignments themselves. If a certain parameter configuration for a convolutional neural network led to a Python out-of-memory error because the filter size and number of layers interacted in a way that made the network architecture too large, it is appropriate to report a failed Observation. If model training abruptly stops because a machine randomly fails, it would not be appropriate to report a failed Observation. In that case, we recommend re-running the evaluation or deleting the open Suggestion.
Reporting a failed Observation is as simple as setting the `failed` flag in the Observation Create call.
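For instance, with the SigOpt Python client (a minimal sketch: the token and experiment id are placeholders, and `evaluate_model` stands in for your own evaluation code):

```python
from sigopt import Connection

conn = Connection(client_token="YOUR_API_TOKEN")
experiment_id = "EXPERIMENT_ID"  # placeholder

# Ask SigOpt for the next point to evaluate.
suggestion = conn.experiments(experiment_id).suggestions().create()

try:
    # evaluate_model is a placeholder for your objective function.
    value = evaluate_model(suggestion.assignments)
except MemoryError:
    # The Assignments themselves made evaluation impossible, so report
    # a failed Observation by setting the `failed` flag.
    conn.experiments(experiment_id).observations().create(
        suggestion=suggestion.id,
        failed=True,
    )
else:
    conn.experiments(experiment_id).observations().create(
        suggestion=suggestion.id,
        values=[dict(value=value)],
    )
```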
Use the following Python command to indicate that a Run has failed for any reason.
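A sketch using the Runs API of the `sigopt` package, assuming `create_run` and `log_failure`; `train_and_evaluate` is a placeholder for your own training code:

```python
import sigopt

with sigopt.create_run(name="cnn-training") as run:
    try:
        run.log_metric("accuracy", train_and_evaluate())
    except Exception:
        # Mark this Run as failed, whatever the reason.
        run.log_failure()
```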
The complexity of the failures and the tightness of your observation budget impact the speed at which SigOpt will learn to avoid failures. We recommend slightly increasing the observation budget for experiments with a non-trivial number of failed Observations.