Experiment Client Calls

sigopt.create_experiment(...)

Creates a new Experiment.
| Name | Type | Required? | Description |
| --- | --- | --- | --- |
| name | string | Yes | A user-specified name for this experiment. |
| parameters | list of parameters | Yes | A list of Parameter objects. |
| conditionals | list of conditionals | No | A list of Conditional objects. |
| linear_constraints | list of parameter constraints | No | A list of linear constraints defined over the parameters. |
| metadata | dictionary of {string: value} | No | Optional user-provided object. See Using Metadata for more information. |
| metrics | list of metrics | Yes | A list of Metric definitions to be optimized or stored for analysis. If the list has length one, the standard single-metric optimization problem is conducted. The list can have no more than 2 optimized entries and no more than 50 entries in total. |
| num_solutions | int | No | The number of (diverse) solutions SigOpt will search for. This feature is only available on special plans and does not need to be set unless the desired number of solutions is greater than 1. A budget is required when the number of solutions is greater than 1, and categorical parameters are not allowed in multiple-solution experiments. |
| budget | int | No | The number of Runs you plan to create for this Experiment. Required when the length of metrics is greater than 1; optional for a single-metric experiment. Deviating from this value, especially by failing to reach it, may result in suboptimal performance for your experiment. |
| parallel_bandwidth | int | No | The number of simultaneous Runs you plan to maintain during this experiment. Defaults to 1, i.e., a sequential experiment; the maximum value depends on your plan. Optional, but setting it correctly may improve performance. |
| type | string | No | The type of this experiment: "offline", "random", or "grid". "offline" experiments use SigOpt's optimizer, "random" executes random search, and "grid" executes grid search. |
Example for creating an Experiment:
```python
experiment = sigopt.create_experiment(
    name="Keras Model Optimization (Python)",
    type="offline",
    parameters=[
        dict(name="hidden_layer_size", type="int", bounds=dict(min=32, max=128)),
        dict(
            name="activation_fn",
            type="categorical",
            categorical_values=["relu", "tanh"],
        ),
    ],
    metrics=[dict(name="holdout_accuracy", objective="maximize")],
    parallel_bandwidth=1,
    budget=30,
)
```
Once you’ve created an Experiment, you can loop through its Runs in either of two ways:

```python
for run in experiment.loop():
    with run:
        ...
```

```python
while not experiment.is_finished():
    with experiment.create_run() as run:
        ...
```

sigopt.get_experiment(experiment_id)

Retrieves an existing Experiment.
| Name | Type | Required? | Description |
| --- | --- | --- | --- |
| experiment_id | string | Yes | The id of the Experiment to retrieve. |

Returns a SigOpt Experiment object specified by the provided experiment_id.

experiment.create_run()

Creates a new Run in the Experiment. Returns a RunContext object to use for tracking Run attributes.

experiment.loop()

Starts an Experiment loop. Returns an iterator of RunContext objects, used for tracking the attributes of each Run in the experiment. The iterator terminates when the Experiment has consumed its entire budget.

experiment.is_finished()

Checks whether the Experiment has consumed its entire budget.
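The relationship between `loop()`, `is_finished()`, and the budget can be sketched with a self-contained stand-in. This is not the real client: `StubExperiment` and its fields are illustrative assumptions, showing only that the iterator yields one run context per remaining unit of budget.

```python
import contextlib

class StubExperiment:
    """Illustrative stand-in mimicking the budget-driven loop semantics."""

    def __init__(self, budget):
        self.budget = budget
        self.runs_created = 0

    def is_finished(self):
        # True once the experiment has consumed its entire budget.
        return self.runs_created >= self.budget

    @contextlib.contextmanager
    def create_run(self):
        self.runs_created += 1
        yield {"id": self.runs_created}  # stand-in for a RunContext

    def loop(self):
        # Yields run contexts until the budget is exhausted, mirroring
        # `for run in experiment.loop(): with run: ...`
        while not self.is_finished():
            yield self.create_run()

experiment = StubExperiment(budget=3)
ids = []
for run in experiment.loop():
    with run as ctx:
        ids.append(ctx["id"])
print(ids)  # → [1, 2, 3]
```

Both iteration styles shown above reduce to this pattern: each created Run consumes one unit of budget, and the loop stops as soon as `is_finished()` becomes true.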

experiment.refresh()

Refreshes the Experiment attributes.

experiment.get_runs()

Returns an iterator of all the TrainingRuns for an Experiment. Method applied to an instance of an Experiment object.

experiment.get_best_runs()

Returns an iterator of the best TrainingRuns for an Experiment. Method applied to an instance of an Experiment object.
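As an illustration of how these run iterators are typically consumed, the sketch below summarizes runs by a reported metric value. The `Run` class here is a deliberately minimal stand-in for a TrainingRun (assuming only a `values` mapping and a `state` string, both simplifications); with a real experiment you would pass `experiment.get_runs()` or `experiment.get_best_runs()` instead of the hand-built list.

```python
from dataclasses import dataclass, field

@dataclass
class Run:
    """Stand-in for a SigOpt TrainingRun; real runs carry many more fields."""
    state: str
    values: dict = field(default_factory=dict)

def summarize(runs, metric="holdout_accuracy"):
    """Collect the given metric from completed runs, best value first."""
    completed = [r for r in runs if r.state == "completed"]
    return sorted((r.values[metric] for r in completed), reverse=True)

runs = [
    Run("completed", {"holdout_accuracy": 0.91}),
    Run("failed"),
    Run("completed", {"holdout_accuracy": 0.88}),
]
print(summarize(runs))  # → [0.91, 0.88]
```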

experiment.update()

Updates experiment parameters during execution.

| Name | Type | Required? | Description |
| --- | --- | --- | --- |
| parameters | list of parameters | Yes | A list of Parameter objects. |
Example for updating parameter bounds within an existing experiment:

```python
experiment = sigopt.get_experiment(experiment_id)
parameters = experiment.parameters
parameters[0].bounds.max = 100
experiment.update(parameters=parameters)
```

experiment.archive()

Archives the Experiment. Associated Runs are not archived and can still be found on the Project Runs page. Method applied to an instance of an Experiment object.