Bring Your Own Optimizer

At SigOpt, we have invested thousands of hours in developing a powerful optimization engine to efficiently conduct the hyperparameter search process. We understand, however, that some users prefer to bring their own strategies to the model development process — SigOpt fully supports this. On this page, and in the notebook linked below, we explain how you can use your own optimization tool while storing the progress in SigOpt to inspect and visualize on our web dashboard.

Show me the code

View sample Bring Your Own Optimizer implementations in a notebook

See this notebook for a complete demonstration of Manual Search, Grid Search, Random Search, Optuna, and Hyperopt. For more notebook instructions and tutorials, check out our GitHub notebook tutorials repo.

The content below previews the visualizations generated by that notebook, along with brief code snippets.

Goal: Visualize and store results from different optimization approaches

You will see how to track each iteration of multiple optimization loops, each using a different optimization approach, as a SigOpt run, and how to group all of the runs into a SigOpt project. You can then visualize the results of all the runs in the project together without writing any plotting code. If you generate custom plots in your code, you can attach them to runs and store them on the SigOpt platform alongside any other metadata you decide is relevant to track as a run artifact, as sketched below; for more detail, please see the runs API reference.
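
For example, a run created by any of these loops can record parameters, metrics, and a custom plot. The snippet below is a minimal sketch rather than the notebook's code: the project id, parameter name, and metric values are illustrative, and it assumes sigopt.set_project and run.log_image (which accepts a matplotlib figure) behave as described in the runs API reference.

import matplotlib.pyplot as plt
import sigopt

# illustrative project id; runs created below are grouped under it
sigopt.set_project("byo-optimizer-demo")

with sigopt.create_run(name="manual search") as run:
  run.log_parameter("parameter_0", 128)  # hyperparameter used for this run
  run.log_metric("accuracy", 0.87)       # illustrative metric value
  # attach a custom plot to the run as an artifact
  fig, ax = plt.subplots()
  ax.plot([0.50, 0.72, 0.87])
  ax.set_ylabel("accuracy")
  run.log_image(fig, name="training curve")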

Logging run artifacts with SigOpt

In the notebook linked above, you will find a class, structured like the following pseudocode, that instantiates data objects to manage training runs across optimization approaches. A training run is created during each iteration of every optimization loop, for each global optimization strategy we invoke. Please see here to learn more about experiment management in SigOpt:

import sigopt

class Run(object):
  def __init__(self, run_type):
    ...
    self.run_type = run_type

  def execute(self, args):
    with sigopt.create_run(name=self.run_type) as run:
      # get data
      ...
      # register hyperparameter values with sigopt
      ...
      # log metadata about the source of the hyperparameter suggestion
      ...
      # instantiate, train, and evaluate the model
      ...
      # log metric results to the sigopt run
      ...
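
To make the skeleton concrete, here is one way the execute body might be filled in. This is a minimal sketch rather than the notebook's exact code: it assumes scikit-learn's wine dataset and a random forest classifier, the parameter name n_estimators is illustrative, and it returns the evaluation metric so a downstream optimizer can consume it.

import sigopt
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

class RandomForestRun(Run):
  def execute(self, args):
    with sigopt.create_run(name=self.run_type) as run:
      # get data
      X, y = load_wine(return_X_y=True)
      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
      # register hyperparameter values with sigopt
      run.log_parameter("n_estimators", args["n_estimators"])
      # log metadata about the source of the hyperparameter suggestion
      run.log_metadata("suggestion source", self.run_type)
      # instantiate, train, and evaluate the model
      model = RandomForestClassifier(n_estimators=args["n_estimators"], random_state=0)
      model.fit(X_train, y_train)
      accuracy = model.score(X_test, y_test)
      # log metric results to the sigopt run
      run.log_metric("accuracy", accuracy)
      return accuracy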

To implement a customized random process for generating parameter configurations, SigOpt lets you assign your own values (randomly generated, in this case) for each parameter. Using the API in this way allows you to run your random process for as many iterations as you wish during an Experiment, and then to continue the Experiment with other optimization approaches.

import numpy

BUDGET = 20  # illustrative iteration budget; choose your own

for _ in range(BUDGET):
  args = dict()
  args["parameter_0"] = numpy.random.randint(low=32, high=512)
  args["parameter_1"] = "class-0" if numpy.random.random() >= 0.75 else "class-1"
  random_run = Run(run_type="random search")
  random_run.execute(args)
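
The same wrapper extends to third-party optimizers such as Optuna or Hyperopt. As a sketch, here is how each Optuna trial could be logged as a SigOpt run; it assumes an execute method that returns the metric being maximized (as in the sketch above), and the study settings are illustrative.

import optuna

def objective(trial):
  args = dict()
  args["parameter_0"] = trial.suggest_int("parameter_0", 32, 512)
  args["parameter_1"] = trial.suggest_categorical("parameter_1", ["class-0", "class-1"])
  optuna_run = Run(run_type="optuna")
  # execute is assumed to return the metric value to maximize
  return optuna_run.execute(args)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=BUDGET)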
