Metric Failure
If SigOpt makes a Suggestion that is not feasible, you can report a failed Observation, which tells us that this Suggestion led to a metric failure. As you report more of these failed Observations, our internal optimization algorithms will figure out the feasible region and only recommend points there.
Here are some examples of when to report metric failure:
  • A neural network architecture that fails to converge,
  • A chemical mixture that is known to lead to undesirable results,
  • Assignments that are simply not in the domain of the function you're trying to optimize.

Alternatives to Metric Failure

If an infeasible region of the parameter space is known beforehand, it may be possible to exclude it up front with Parameter Constraints. When feasibility is defined by thresholding on auxiliary, non-optimized metric values, Metric Constraints may be a better fit.
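As an illustration of the first alternative, a known-infeasible region can often be encoded as a linear constraint at experiment creation time. The sketch below (hypothetical parameter names `x` and `y`; the payload shape follows the SigOpt linear-constraints convention, but treat it as illustrative rather than definitive) encodes x - y ≥ 0, so only points with x ≥ y are suggested:

```python
# Sketch: a linear Parameter Constraint encoding x - y >= 0,
# i.e. only suggest points where x >= y.
constraint = dict(
    type="greater_than",
    threshold=0,
    terms=[
        dict(name="x", weight=1),
        dict(name="y", weight=-1),
    ],
)

# Passed when creating the experiment, e.g.:
# conn.experiments().create(..., linear_constraints=[constraint], ...)
```

With the infeasible region declared up front, SigOpt never has to learn it from failed Observations.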
Note that a failed Observation should be reported only if obtaining an evaluation metric was not possible because of the Assignments themselves. If a certain parameter configuration for a convolutional neural network led to a Python out-of-memory error because the filter size and number of layers interacted in a certain way to make the network architecture too large, it is appropriate to report a failed Observation. If model training abruptly stops because a machine randomly fails, it would not be appropriate to report a failed Observation. In that case, we recommend deleting the Suggestion or re-evaluating the open Suggestion.
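The distinction above can be captured in code. This is a hypothetical helper (the function name, the exception choices, and the returned labels are illustrative, not part of the SigOpt API): failures caused by the Assignments are reported as failed Observations, while transient infrastructure errors lead to deleting or re-evaluating the Suggestion instead.

```python
def classify_failure(evaluate, assignments):
    """Run `evaluate(assignments)` and decide how to report the outcome.

    Returns one of:
      "observation" - evaluation succeeded; report the metric value
      "failed"      - the Assignments caused the failure; report failed=True
      "retry"       - transient infrastructure error; delete or re-evaluate
                      the open Suggestion instead of reporting failure
    """
    try:
        value = evaluate(assignments)
    except MemoryError:
        # e.g. filter size and layer count made the architecture too large:
        # the Assignments themselves are infeasible, so report failed=True
        return "failed"
    except OSError:
        # e.g. a machine or network randomly failed; not the Assignments'
        # fault, so do not report a failed Observation
        return "retry"
    return "observation"
```

Only the `"failed"` branch should result in an Observation with `failed=True`; the `"retry"` branch should leave the feasibility model untouched.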

Reporting Failure

Core Module

Reporting failed Observations is as simple as setting a flag in the Observation Create call.
Python
Bash
Java
from sigopt import Connection

conn = Connection(client_token="USER_TOKEN")
observation = conn.experiments(EXPERIMENT_ID).observations().create(
  failed=True,
  suggestion="SUGGESTION_ID",
)
OBSERVATION=`curl -s -X POST https://api.sigopt.com/v1/experiments/EXPERIMENT_ID/observations -u "$SIGOPT_API_TOKEN": \
-H 'Content-Type: application/json' \
-d "{\"suggestion\":\"SUGGESTION_ID\",\"failed\":true}"`
import com.sigopt.SigOpt;
import com.sigopt.exception.SigoptException;
import com.sigopt.model.*;

public class YourSigoptExperiment {
  public static Observation createFailedObservation() throws SigoptException {
    Observation observation = new Experiment(EXPERIMENT_ID).observations().create()
      .data(
        new Observation.Builder()
          .failed(true)
          .suggestion("SUGGESTION_ID")
          .build()
      )
      .call();
    return observation;
  }
}

AI Module

Use the following Python command to indicate that a Run has failed for any reason.
Python
sigopt.log_failure()
The complexity of failures and the tightness of your Parameter Bounds impact the speed at which SigOpt will learn to avoid failures. We recommend slightly increasing the observation budget for experiments with a non-trivial number of failed Observations.
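One simple way to size that increase is to pad the budget by the failure rate you expect. This is a hypothetical rule of thumb, not an official SigOpt formula: if a fraction f of Observations is expected to fail, roughly budget / (1 - f) Observations are needed to get the same number of successful ones.

```python
import math

def padded_budget(base_budget, expected_failure_rate):
    # Hypothetical rule of thumb: with a fraction f of Observations
    # expected to fail, roughly base_budget / (1 - f) total Observations
    # are needed to obtain base_budget successful ones.
    return math.ceil(base_budget / (1.0 - expected_failure_rate))
```

For example, a base budget of 100 with a 10% expected failure rate pads to 112 Observations.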