Bring Your Own Optimizer
At SigOpt, we have invested thousands of hours in developing a powerful optimization engine to conduct the hyperparameter search process efficiently. We understand, however, that some users prefer to bring their own strategies to the model development process, and SigOpt fully supports this. On this page, and in the notebook linked below, we explain how you can use your own optimization tool while storing the progress in SigOpt, so you can inspect and visualize it on our web dashboard.
Show me the code
View sample Bring Your Own Optimizer implementations in a notebook
See this notebook for complete demonstrations of Manual Search, Grid Search, Random Search, Optuna, and Hyperopt. For more notebook instructions and tutorials, check out our GitHub notebook tutorials repo.
The rest of this page previews the visualizations generated by that notebook, along with brief code snippets.
Goal: Visualize and store results from different optimization approaches
You will see how to track each iteration of multiple optimization loops, each using a different optimization approach, as a SigOpt run, and how to group all of the runs into a SigOpt project. You can then visualize the results of all the runs in the project together without writing any plotting code. If your code produces custom plots, you can attach them to runs and store them on the SigOpt platform, alongside any other metadata you decide is relevant to track, as run artifacts; for more detail, please see the runs API reference. A minimal sketch of this pattern follows.
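The snippet below is a minimal sketch of one iteration of a bring-your-own-optimizer loop, assuming the `sigopt` Python client's runs API (`sigopt.set_project`, `sigopt.create_run`, `run.log_metric`, `run.log_image`); the `train_and_plot` function, the parameter value, and the project name are illustrative placeholders, not part of the SigOpt API.

```python
import matplotlib.pyplot as plt
import sigopt

sigopt.set_project("byo-optimizer")  # group all runs under one project

def train_and_plot(learning_rate):
    # Hypothetical stand-in for your real training code: returns a
    # metric value and a diagnostic matplotlib figure.
    accuracy = 1.0 - learning_rate  # placeholder objective
    fig, ax = plt.subplots()
    ax.plot([0, 1], [0, accuracy])
    ax.set_title("Loss curve")
    return accuracy, fig

# One SigOpt run per iteration of your own optimization loop.
with sigopt.create_run(name="byo-optimizer-iteration-0") as run:
    run.params.learning_rate = 0.1         # record the configuration you chose
    accuracy, fig = train_and_plot(run.params.learning_rate)
    run.log_metric("accuracy", accuracy)   # record the result
    run.log_image(fig, name="loss-curve")  # attach the custom plot as a run artifact
```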
Logging run artifacts with SigOpt
In the notebook linked above, you will find a class, structured like the pseudocode below, that instantiates data objects to manage training runs across optimization approaches. These training runs are created during each iteration of the various optimization loops executed for each global optimization strategy we invoke. Please see here to learn more about experiment management in SigOpt:
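A hedged sketch of such a class follows; the `RunTracker` name, its methods, and the `objective` callable are illustrative conventions, not SigOpt API, while `sigopt.set_project`, `sigopt.create_run`, `run.params`, `run.log_metadata`, and `run.log_metric` follow the runs API.

```python
import sigopt

class RunTracker:
    """Illustrative helper (not SigOpt API): wraps one model evaluation in a
    SigOpt run so every optimizer logs its iterations the same way."""

    def __init__(self, project, objective):
        sigopt.set_project(project)  # group all runs under one project
        self.objective = objective   # callable: dict of assignments -> metric value

    def evaluate(self, assignments, optimizer_name):
        # One SigOpt run per iteration of the enclosing optimization loop.
        with sigopt.create_run(name=f"{optimizer_name}-run") as run:
            for name, value in assignments.items():
                run.params[name] = value         # record the configuration
            run.log_metadata("optimizer", optimizer_name)
            metric = self.objective(assignments)
            run.log_metric("objective", metric)  # record the result
        return metric
```

A random-search loop would then call `tracker.evaluate(sampled_assignments, "random-search")` once per sampled configuration, and an Optuna or Hyperopt objective function can do the same from inside its own trial, so all approaches land in one project for side-by-side comparison.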
To create a customized random process for generating parameter configurations, SigOpt allows you to assign user-generated values (randomly generated, in this case) for each parameter. Using the API in this way lets you run your random process for as many iterations as you wish during an Experiment, and then continue the Experiment with other optimization approaches, as sketched below.
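One way to do this is with the Core API's `Connection` client, reporting observations whose `assignments` you generated yourself instead of drawing suggestions from SigOpt. In this sketch, the experiment definition, parameter bounds, and `evaluate_model` are illustrative placeholders, under the assumption that your own training code fills in the evaluation.

```python
import random
from sigopt import Connection

conn = Connection(client_token="SIGOPT_API_TOKEN")  # your API token

# Define the Experiment once; the parameter names and bounds are illustrative.
experiment = conn.experiments().create(
    name="byo-random-search",
    parameters=[dict(name="learning_rate", type="double",
                     bounds=dict(min=1e-4, max=1e-1))],
    metrics=[dict(name="accuracy", objective="maximize")],
    observation_budget=30,
)

def evaluate_model(assignments):
    # Hypothetical stand-in for your real training and evaluation code.
    return 1.0 - assignments["learning_rate"]

# Phase 1: our own random process supplies the parameter assignments.
for _ in range(10):
    assignments = dict(learning_rate=10 ** random.uniform(-4, -1))
    accuracy = evaluate_model(assignments)
    conn.experiments(experiment.id).observations().create(
        assignments=assignments,  # user-generated values, not a SigOpt suggestion
        values=[dict(name="accuracy", value=accuracy)],
    )

# Phase 2: hand the same Experiment back to SigOpt's optimizer.
suggestion = conn.experiments(experiment.id).suggestions().create()
```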