
Bring Your Own Optimizer

At SigOpt, we have invested thousands of hours in developing a powerful optimization engine to efficiently conduct the hyperparameter search process. We understand, however, that some users prefer to bring their own strategies to the model development process, and SigOpt fully supports this. On this page, and in the notebook linked below, we explain how to use your own optimization tool and store its progress in SigOpt so you can inspect and visualize the results on our web dashboard.

Show me the code

View sample Bring Your Own Optimizer implementations in a notebook
See this notebook for a complete demonstration of Manual Search, Grid Search, Random Search, Optuna, and Hyperopt. For more notebook instructions and tutorials, check out our GitHub notebook tutorials repo.
The content below previews the visualizations generated by that notebook, along with brief code snippets.

Goal: Visualize and store results from different optimization approaches

You will see how to track each iteration of multiple optimization loops, each using a different optimization approach, as a SigOpt run, and how to group all of the runs into a SigOpt project. You can then visualize the results of all the runs in the project together without writing any plotting code. If you have custom plots in your code, you can attach them to runs as run artifacts and store them on the SigOpt platform, along with any other metadata you decide is relevant to track. For more detail, please see the runs API reference.
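As a minimal sketch of that workflow, the snippet below creates a single run inside a project and attaches a matplotlib figure to it as a run artifact. The project name, metric value, and figure contents are placeholders, and the log_image call for figures is an assumption that should be checked against the runs API reference for your client version.

import os
import matplotlib.pyplot as plt
import sigopt

# Group runs from every optimizer under one project; the SIGOPT_PROJECT
# environment variable is one way to select it (this project id is hypothetical).
os.environ["SIGOPT_PROJECT"] = "bring-your-own-optimizer-demo"

with sigopt.create_run(name="random search") as run:
    # Placeholder metric value; a real run would log the evaluated model's score.
    run.log_metric("accuracy", 0.92)
    # Attach a custom plot as a run artifact; log_image is assumed to accept
    # a matplotlib figure (see the runs API reference).
    figure, axis = plt.subplots()
    axis.plot([0.70, 0.85, 0.92])
    axis.set_ylabel("accuracy")
    run.log_image(figure, name="training-curve")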

Logging run artifacts with SigOpt

In the notebook linked above you will find a class, structured like the pseudocode below, that manages training runs across optimization approaches. One of these training runs is created during each iteration of the optimization loop executed for each global optimization strategy we invoke. Please see here to learn more about experiment management in SigOpt:
class Run(object):
    def __init__(self, run_type):
        ...
        self.run_type = run_type

    def execute(self, args):
        with sigopt.create_run(name=self.run_type) as run:
            # get data
            ...
            # register hyperparameter values with sigopt
            ...
            # log metadata about source of hyperparameter suggestion
            ...
            # instantiate, train, and evaluate model
            ...
            # log metric results to sigopt run
            ...
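To make the pseudocode concrete, here is one way the execute method could be filled in. This is a sketch, not the notebook's exact implementation: it assumes a scikit-learn dataset and classifier, hyperparameter names matching the Optuna example below, that run.params behaves like a dictionary, and the log_metadata and log_metric calls described in the runs API reference.

import sigopt
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier


class Run(object):
    def __init__(self, run_type):
        self.run_type = run_type

    def execute(self, args):
        with sigopt.create_run(name=self.run_type) as run:
            # get data (placeholder dataset)
            X, y = load_wine(return_X_y=True)
            X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
            # register hyperparameter values with sigopt
            # (assumes run.params is dict-like, per the runs API reference)
            run.params.update(args)
            # log metadata about source of hyperparameter suggestion
            run.log_metadata("suggestion_source", self.run_type)
            # instantiate, train, and evaluate model (cast to int because some
            # optimizers, e.g. Hyperopt's quniform, suggest floats)
            model = MLPClassifier(
                hidden_layer_sizes=(int(args["hidden_layer_size"]),),
                activation=args["activation_function"],
            )
            model.fit(X_train, y_train)
            accuracy = model.score(X_test, y_test)
            # log metric results to sigopt run
            run.log_metric("accuracy", accuracy)
        return accuracy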

Random search

SigOpt lets you assign user-generated values (randomly generated, in this case) for each parameter, so you can implement your own random process for producing parameter configurations. Using the API this way, you can run your random process for as many iterations as you wish during an Experiment, and then continue the Experiment with other optimization approaches.
for _ in range(BUDGET):
    args = dict()
    args["parameter_0"] = numpy.random.randint(low=32, high=512)
    args["parameter_1"] = "class-0" if numpy.random.random() >= 0.75 else "class-1"
    random_run = Run(run_type="random search")
    random_run.execute(args)

Grid search

Another common approach to optimizing hyperparameters is to specify a grid of values, evaluate the learning algorithm with those parameters, and then observe results across the grid you defined.
for p0 in p0_grid_values:
    for p1 in p1_grid_values:
        args = dict(p0=p0, p1=p1)
        grid_run = Run(run_type="grid search")
        grid_run.execute(args)
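If the grid covers more than two parameters, the nested loops above can be flattened with itertools.product. The grid values below are hypothetical; substitute whatever ranges you want to sweep.

import itertools

# Hypothetical grid values for the two parameters used above.
p0_grid_values = [32, 128, 512]
p1_grid_values = ["class-0", "class-1"]

for p0, p1 in itertools.product(p0_grid_values, p1_grid_values):
    grid_run = Run(run_type="grid search")
    grid_run.execute(dict(p0=p0, p1=p1))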

Hyperopt

Hyperopt was introduced by Bergstra et al. in 2013. The snippet below shows how to use Hyperopt as an optimizer while leveraging the logging and visualization functionality of SigOpt.
def hyperopt_objective_function(args):
    hyperopt_run = Run(run_type="hyperopt search")
    metric_value = hyperopt_run.execute(args)
    return metric_value

hyperopt_space = {...}

result = hyperopt.fmin(
    fn=hyperopt_objective_function, space=hyperopt_space, algo=ALGORITHM, max_evals=BUDGET, show_progressbar=False
)
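The search space and algorithm are elided above. As one illustrative configuration (not necessarily the one used in the notebook), the space could be built from hyperopt's hp expressions and ALGORITHM set to the TPE suggest function. Note that fmin always minimizes: if execute returns a loss, returning it directly as above is correct, but a metric you want to maximize (such as accuracy) should be negated before it is returned.

from hyperopt import hp, tpe

# Illustrative search space; the parameter names and ranges are assumptions.
hyperopt_space = {
    "hidden_layer_size": hp.quniform("hidden_layer_size", 32, 512, 1),
    "activation_function": hp.choice("activation_function", ["tanh", "relu"]),
}
ALGORITHM = tpe.suggest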

Optuna

Optuna was introduced in 2019 by Akiba et al. at Preferred Networks. Using code like the example below, you can use Optuna as an optimizer while leveraging the logging and visualization functionality of SigOpt.
def optuna_objective_function(trial):
    args = dict(
        hidden_layer_size=trial.suggest_int("hidden_layer_size", 32, 512, 1),
        activation_function=trial.suggest_categorical("activation_function", ["tanh", "relu"]),
    )
    optuna_run = Run(number_of_epochs=NUMBER_OF_EPOCHS, run_type="optuna search")
    metric_value = optuna_run.execute(args)
    return metric_value

study = optuna.create_study(direction="maximize")
study.optimize(optuna_objective_function, n_trials=BUDGET, show_progress_bar=False)
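After the loop finishes, every trial is stored as a run in your SigOpt project, and Optuna also keeps its own record of the best trial. Since the study above maximizes the returned metric, you can inspect Optuna's view of the result directly:

# Best metric value and hyperparameters as recorded by Optuna.
print("best value:", study.best_value)
print("best params:", study.best_params)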