Quick Start Tutorials
SigOpt enables you to track and organize your modeling experiments, trace your decisions, and reproduce your work. With interactive visuals you can quickly compare training curves, metrics, and models. This in turn helps you understand model performance, inform your intuition, and explain results to your colleagues.
A SigOpt Run stores a model’s attributes, training checkpoints, and evaluated metrics, so that modelers can see a history of their work. This is the fundamental building block of the SigOpt AI module.
Runs record everything you might need to understand how a model was built, reconstitute the model in the future, or explain the process to a colleague.
For a complete list of attributes, see the API Reference or the SigOpt Runs docs.
SigOpt Runs can be recorded by adding a few SigOpt code snippets to the Python code you run in a notebook or from the command line.
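For example, a Run can be created directly from Python. The following is a minimal sketch, assuming the `sigopt` Python client is installed and an API token is configured; the exact logging method names should be checked against the Runs docs:

```python
# Minimal sketch of recording a SigOpt Run (assumes the `sigopt` Python client
# is installed and the SIGOPT_API_TOKEN environment variable is set).
import sigopt


def train_model(learning_rate, batch_size):
    # Placeholder for your real training loop; returns an evaluation metric.
    return 0.91


with sigopt.create_run(name="quickstart-baseline") as run:
    # Record the attributes used to build the model.
    run.log_parameter("learning_rate", 0.01)
    run.log_parameter("batch_size", 64)
    run.log_metadata("dataset", "mnist-v1")

    accuracy = train_model(learning_rate=0.01, batch_size=64)

    # Record the evaluated metric so it appears in the SigOpt web dashboard.
    run.log_metric("accuracy", accuracy)
```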
A SigOpt AI Experiment is an automated search of your model's hyperparameter space. It works as follows:
You start by defining the hyperparameter and metric space. You then use the SigOpt API to request SigOpt Runs; for each Run, the SigOpt Optimizer intelligently suggests a hyperparameter configuration to try, converging on the best configuration for your model.
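The sketch below illustrates this flow, assuming the `sigopt` Python client and a configured API token; parameter and metric field names follow the AI Experiment API but should be checked against the API Reference:

```python
# Sketch of a SigOpt AI Experiment: define the hyperparameter and metric space,
# then report each suggested Run's metrics back to the optimizer.
import sigopt


def evaluate_model(learning_rate, batch_size):
    # Placeholder objective; replace with your real training and validation code.
    return 1.0 - abs(learning_rate - 0.01) - 0.0001 * batch_size


experiment = sigopt.create_experiment(
    name="quickstart-optimization",
    parameters=[
        dict(name="learning_rate", type="double", bounds=dict(min=1e-4, max=1e-1)),
        dict(name="batch_size", type="int", bounds=dict(min=16, max=256)),
    ],
    metrics=[dict(name="accuracy", objective="maximize")],
    budget=30,  # total number of Runs the optimizer may request
)

# Each iteration yields a Run whose parameter values were suggested by the optimizer.
for run in experiment.loop():
    with run:
        accuracy = evaluate_model(run.params.learning_rate, run.params.batch_size)
        run.log_metric("accuracy", accuracy)
```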
SigOpt AI Experiments support the following types of optimization:
SigOpt Search: use our proprietary, world-class Bayesian Optimizer to search your parameter space and find the most performant parameter values
All Constraint Search: leverage our Bayesian Optimizer to emphasize exploring the parameter space by defining metrics as constraints (guardrails) instead of optimization objectives (see the configuration sketch after this list).
Grid Search: execute grid search with SigOpt
Random Search: execute random search with SigOpt
Bring your own optimizer and track with SigOpt: use your preferred optimizer, or test alternatives, and track it all consistently for visualization in SigOpt’s web dashboard (a tracking sketch appears at the end of this section).
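The experiment definition controls which of these strategies is used. The snippet below is a hedged sketch of two variants; field names such as `type`, `strategy`, and `threshold` follow the AI Experiment API but should be verified against the API Reference:

```python
# Hedged sketch of alternate search configurations (field names should be
# checked against the SigOpt AI Experiment docs).
import sigopt

# All Constraint Search: every metric is a guardrail rather than an objective.
constrained = sigopt.create_experiment(
    name="constraint-search",
    parameters=[dict(name="learning_rate", type="double", bounds=dict(min=1e-4, max=1e-1))],
    metrics=[
        dict(name="accuracy", strategy="constraint", objective="maximize", threshold=0.90),
        dict(name="inference_time_ms", strategy="constraint", objective="minimize", threshold=50),
    ],
    budget=30,
)

# Grid Search: enumerate explicit parameter values instead of bounds.
# (Random Search is configured the same way, with type="random" and bounds.)
grid = sigopt.create_experiment(
    name="grid-search",
    type="grid",
    parameters=[dict(name="batch_size", type="int", grid=[16, 32, 64, 128])],
    metrics=[dict(name="accuracy", objective="maximize")],
)
```

The loop over suggested Runs is the same as in the optimization sketch above; only the experiment definition changes.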
For a complete list of functionality, see the API Reference or the SigOpt AI Experiment docs.
SigOpt AI Experiments can be recorded by adding a few SigOpt code snippets to the Python code you run in a notebook or from the command line.
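For example, to bring your own optimizer (the last option in the list above) while still tracking every trial, each configuration can be recorded as a standalone Run. In this sketch, random sampling stands in for your preferred optimizer:

```python
# Sketch: bring your own optimizer and track each trial as a SigOpt Run.
# Random sampling stands in for whatever optimizer you prefer.
import random

import sigopt


def evaluate_model(learning_rate):
    # Placeholder for your training and evaluation code.
    return 1.0 - abs(learning_rate - 0.01)


for trial in range(20):
    learning_rate = 10 ** random.uniform(-4, -1)  # your optimizer's suggestion
    with sigopt.create_run(name=f"byo-optimizer-trial-{trial}") as run:
        run.log_parameter("learning_rate", learning_rate)
        run.log_metric("accuracy", evaluate_model(learning_rate))
```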