A key component of the SigOpt Platform is the ability to go from tracking your model with SigOpt Runs to optimizing that very same model with minimal changes to your code.
At a high level, a SigOpt AI Experiment is a grouping of SigOpt Runs defined by user-specified parameter and metric spaces. An AI Experiment has a budget that determines the number of hyperparameter tuning loops to conduct, and each loop produces a SigOpt Run with suggested assignments for each parameter. Different sets of hyperparameter values are suggested by the SigOpt algorithms, by the user, or by both, with the goal of finding the optimal set(s) of hyperparameters. Over time, when using the SigOpt Optimizer, you can expect your model's performance on your metrics to improve. The example below creates an AI Experiment for a Keras model, runs the optimization loop, and retrieves the best Runs.
import sigopt

# Define the parameter space, metric, and budget for the AI Experiment
experiment = sigopt.create_experiment(
    name="Keras Model Optimization (Python)",
    type="offline",
    parameters=[
        dict(name="hidden_layer_size", type="int", bounds=dict(min=32, max=128)),
        dict(name="activation_function", type="categorical", categorical_values=["relu", "tanh"]),
    ],
    metrics=[dict(name="holdout_accuracy", objective="maximize")],
    parallel_bandwidth=1,
    budget=30,
)

# Each iteration creates a SigOpt Run with suggested parameter assignments
for run in experiment.loop():
    with run:
        holdout_accuracy = execute_keras_model(run)
        run.log_metric("holdout_accuracy", holdout_accuracy)

# Get the best Runs for the Experiment
best_runs = experiment.get_best_runs()
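
The training function referenced above is user-defined. Below is a minimal sketch of what execute_keras_model could look like, assuming a small dense classifier trained on MNIST; the dataset, network depth, optimizer, and epoch count are illustrative assumptions, not part of the SigOpt API. The Run's suggested assignments are read from run.params and used to configure the Keras model.

import tensorflow as tf

def execute_keras_model(run):
    # Illustrative assumption: train a small dense classifier on MNIST
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

    # Build the model from the suggested assignments for this Run
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(
            run.params.hidden_layer_size,
            activation=run.params.activation_function,
        ),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(
        optimizer="adam",
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    model.fit(x_train, y_train, epochs=2, verbose=0)

    # Evaluate on the holdout set and return the metric value to log
    _, holdout_accuracy = model.evaluate(x_test, y_test, verbose=0)
    return holdout_accuracy

After the loop completes, get_best_runs() returns the best-performing Runs for the Experiment, which you can iterate over to inspect their parameter assignments and metric values.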