AI Experiments

Optimize Your Model

A key component of the SigOpt Platform is the ability to go from tracking your model with SigOpt Runs to optimizing that very same model with minimal changes to your code.

At a high level, a SigOpt AI Experiment is a grouping of SigOpt Runs defined by user-specified parameter and metric spaces. An AI Experiment has a budget that determines how many hyperparameter tuning loops to conduct. Each loop produces a SigOpt Run with suggested assignments for each parameter. New sets of hyperparameter values are suggested by SigOpt's algorithms, by the user, or by both, with the goal of finding the optimal set(s) of hyperparameters. Over time, when using the SigOpt Optimizer, you can expect your model's performance on your metrics to improve.

The Optimization Loop

There are 3 core steps in the optimization loop:

Create a SigOpt AI Experiment

experiment = sigopt.create_experiment(
  name="Keras Model Optimization (Python)",
  type="offline",
  parameters=[
    dict(name="hidden_layer_size", type="int", bounds=dict(min=32, max=128)),
    dict(name="activation_function", type="categorical", categorical_values=["relu", "tanh"]),
  ],
  metrics=[dict(name="holdout_accuracy", objective="maximize")],
  parallel_bandwidth=1,
  budget=30,
)
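
Here, budget=30 caps the number of optimization runs SigOpt will suggest for this experiment, and parallel_bandwidth=1 indicates that only one run will be executed at a time; raise it if you plan to evaluate several suggestions in parallel.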

Iterate over your AI Experiment

You can iterate over your AI Experiment in either of two ways. Each pass through the optimization loop produces a SigOpt Run with suggested assignments for each parameter; the sketch after these examples shows how your model code can read those assignments.

With experiment.loop(), the client drives the loop for you:

for run in experiment.loop():
  with run:
    # execute model
    # evaluate model
    # report metric values to SigOpt
    ...

Alternatively, create runs yourself until the experiment's budget is exhausted:

while not experiment.is_finished():
  with experiment.create_run() as run:
    # execute model
    # evaluate model
    # report metric values to SigOpt
    ...
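
Inside each run, the suggested values for the experiment's parameters are available on the run context (for example, run.params.hidden_layer_size). The sketch below shows one way a training function might consume them; the Keras architecture, the MNIST dataset, and the helper name execute_keras_model (reused in the full example below) are illustrative assumptions, not part of the SigOpt API.

import tensorflow as tf

def execute_keras_model(run):
  # Read the suggested hyperparameter assignments for this run
  hidden_layer_size = run.params.hidden_layer_size
  activation_function = run.params.activation_function

  # Load a small example dataset (MNIST, purely for illustration)
  (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
  x_train, x_test = x_train / 255.0, x_test / 255.0

  # Build and train a simple model with the suggested values
  model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(hidden_layer_size, activation=activation_function),
    tf.keras.layers.Dense(10, activation="softmax"),
  ])
  model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
  model.fit(x_train, y_train, epochs=2, verbose=0)

  # Evaluate on held-out data and return the value to report to SigOpt
  _, holdout_accuracy = model.evaluate(x_test, y_test, verbose=0)
  return holdout_accuracy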

Report metric values to SigOpt

run.log_metric("holdout_accuracy", holdout_accuracy)
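
If your AI Experiment defines more than one metric, report each of them with its own run.log_metric call before the run context exits.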

Putting It All Together
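
The complete example below combines the three steps; execute_keras_model stands in for the training function sketched earlier.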

experiment = sigopt.create_experiment(
  name="Keras Model Optimization (Python)",
  type="offline",
  parameters=[
    dict(name="hidden_layer_size", type="int", bounds=dict(min=32, max=128)),
    dict(name="activation_function", type="categorical", categorical_values=["relu", "tanh"]),
  ],
  metrics=[dict(name="holdout_accuracy", objective="maximize")],
  parallel_bandwidth=1,
  budget=30,
)

for run in experiment.loop():
  with run:
    holdout_accuracy = execute_keras_model(run)
    run.log_metric("holdout_accuracy", holdout_accuracy)

# get the best Runs for the Experiment
best_runs = experiment.get_best_runs()
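
Once the budget is exhausted, get_best_runs returns the best run(s) the experiment found. A minimal sketch of inspecting them follows; the assignments and values fields are assumed attributes of the returned run objects and may differ slightly across client versions.

# Print the winning hyperparameters and metric value for each best run
# (assignments and values are assumed fields on the returned run objects)
for best_run in best_runs:
  print(best_run.assignments)
  print(best_run.values["holdout_accuracy"].value)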
