AI Experiment Client Calls

sigopt.create_experiment(...)

Creates a new AI Experiment.

Name | Type | Required? | Description
name | string | Yes | A user-specified name for this AI Experiment.
parameters | list of Parameters | Yes | A list of Parameter objects.
conditionals | list of conditionals | No | A list of Conditional objects.
linear_constraints | list of parameter constraints | No | A list of linear constraints defined over the parameters.
metadata | dictionary of {string: value} | No | Optional user-provided object. See Using Metadata for more information.
metrics | list of metrics | Yes | A list of Metric definitions to be optimized or stored for analysis. If the list is of length one, the standard optimization problem is conducted. The list can have no more than 2 optimized entries and no more than 50 entries in total.
num_solutions | int | No | The number of (diverse) solutions SigOpt will search for. This feature is only available on select plans, and only needs to be set when the desired number of solutions is greater than 1, in which case a budget is required. Categorical parameters are not allowed in multiple-solution experiments.
budget | int | No | The number of Runs you plan to create for this AI Experiment. Required when the length of metrics is greater than 1; optional for single-metric experiments. Deviating from this value, especially by failing to reach it, may result in suboptimal performance for your experiment.
parallel_bandwidth | int | No | The number of simultaneous Runs you plan to maintain during this experiment. Defaults to 1, i.e., a sequential experiment. The maximum value depends on your plan. Optional, but setting it correctly may improve performance.
type | string | No | The type of this experiment: one of offline, random, or grid. offline experiments use the SigOpt Optimizer, random executes random search, and grid executes grid search.

Example for creating an AI Experiment:

import sigopt

experiment = sigopt.create_experiment(
    name="Keras Model Optimization (Python)",
    type="offline",
    parameters=[
        dict(name="hidden_layer_size", type="int", bounds=dict(min=32, max=128)),
        dict(
            name="activation_fn",
            type="categorical",
            categorical_values=["relu", "tanh"],
        ),
    ],
    metrics=[dict(name="holdout_accuracy", objective="maximize")],
    parallel_bandwidth=1,
    budget=30,
)
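
The table above also covers multi-metric optimization and linear constraints. The sketch below combines both under stated assumptions: the parameter and metric names are hypothetical, budget is set because more than one metric is optimized (as required above), and the constraint dictionary follows SigOpt's type/threshold/terms linear-constraint schema.

experiment = sigopt.create_experiment(
    name="Multimetric Example",
    parameters=[
        dict(name="alpha", type="double", bounds=dict(min=0, max=1)),
        dict(name="beta", type="double", bounds=dict(min=0, max=1)),
    ],
    metrics=[
        # No more than 2 optimized metrics are allowed per experiment.
        dict(name="accuracy", objective="maximize"),
        dict(name="inference_time", objective="minimize"),
    ],
    linear_constraints=[
        # Require alpha + beta <= 1 (assumed type/threshold/terms schema).
        dict(
            type="less_than",
            threshold=1,
            terms=[dict(name="alpha", weight=1), dict(name="beta", weight=1)],
        ),
    ],
    budget=50,  # required when optimizing more than one metric
)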

Once you’ve created an AI Experiment, you can loop through it in two ways:

# Option 1: iterate over Runs with experiment.loop()
for run in experiment.loop():
    with run:
        ...

# Option 2: create Runs manually until the budget is consumed
while not experiment.is_finished():
    with experiment.create_run() as run:
        ...
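
Putting the loop pattern together with the Keras example above, a minimal sketch looks like the following; evaluate_model is a hypothetical stand-in for your own training and evaluation code.

for run in experiment.loop():
    with run:
        # Read the suggested values for the parameters defined at creation time.
        hidden_layer_size = run.params.hidden_layer_size
        activation_fn = run.params.activation_fn
        # Train and evaluate the model, then report the metric back to SigOpt.
        accuracy = evaluate_model(hidden_layer_size, activation_fn)
        run.log_metric("holdout_accuracy", accuracy)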

sigopt.get_experiment(experiment_id)

Retrieves an existing AI Experiment. Returns the SigOpt AI Experiment object specified by the provided experiment_id.

Name | Type | Required? | Description
experiment_id | string | Yes | The ID of the AI Experiment to retrieve.
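
For example (the experiment ID shown is a placeholder):

experiment = sigopt.get_experiment("123456")  # placeholder ID
print(experiment.name)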

experiment.create_run()

Creates a new Run in the AI Experiment. Returns a RunContext object to use for tracking Run attributes.

experiment.loop()

Starts an AI Experiment loop. Returns an iterator of RunContext objects, used for tracking attributes of each Run in the AI Experiment. The iterator terminates when the AI Experiment has consumed its entire budget.

experiment.is_finished()

Checks whether the AI Experiment has consumed its entire budget.

experiment.refresh()

Refreshes the AI Experiment attributes.
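
For example, to pick up changes made elsewhere (e.g., from the web dashboard):

experiment.refresh()
print(experiment.name)  # attributes now reflect the latest server state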

experiment.get_runs()

Returns an iterator of all the TrainingRuns for an AI Experiment. Method applied to an instance of an AI Experiment object.

experiment.get_best_runs()

Returns an iterator of the best TrainingRuns for an AI Experiment. Method applied to an instance of an AI Experiment object.
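
A short sketch iterating over both methods; run.state and run.values are assumed TrainingRun fields here.

# All Runs, then only the best-performing ones.
for run in experiment.get_runs():
    print(run.id, run.state)

for run in experiment.get_best_runs():
    print(run.id, run.values)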

experiment.update()

Updates experiment parameters during execution.

Name | Type | Required? | Description
parameters | list of Parameters | Yes | A list of Parameter objects.

Example for updating parameter bounds within an AI Experiment:

experiment = sigopt.create_experiment(...)
# Widen the first parameter's upper bound, then push the change to SigOpt.
parameters = experiment.parameters
parameters[0].bounds.max = 100
experiment.update(parameters=parameters)

experiment.archive()

Archives the AI Experiment. Associated Runs will not be archived and can still be found on the Project Runs page. Method applied to an instance of an AI Experiment object.
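
For example (the experiment ID is a placeholder):

experiment = sigopt.get_experiment("123456")  # placeholder ID
experiment.archive()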
