SigOpt enables you to track and organize your modeling experiments, trace your decisions, and reproduce your work. Interactive visualizations let you quickly compare training curves, metrics, and models, which in turn helps you understand model performance, inform your intuition, and explain results to your colleagues. Learn more about SigOpt’s Experiment Management here.
A SigOpt Run is the fundamental building block of SigOpt: it stores a model’s attributes, training checkpoints, and evaluated metrics, so that modelers can see a history of their work.
Runs record everything you might need to understand how a model was built, reconstitute the model in the future, or explain the process to a colleague.
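As a sketch of what a Run records, the snippet below logs a model attribute, a hyperparameter, per-epoch checkpoints, and a final metric with the `sigopt` Python client. The `train` function is a hypothetical stand-in for a real training loop, and `record_run` assumes the client is installed and an API token is configured in your environment; exact method availability may vary by client version.

```python
def train():
    """Hypothetical stand-in for a real training loop:
    pretend the loss shrinks each epoch."""
    losses = [1.0 / (epoch + 1) for epoch in range(5)]
    accuracy = 1.0 - losses[-1]
    return losses, accuracy

def record_run():
    """Record one SigOpt Run; requires the sigopt client and an API token."""
    import sigopt

    with sigopt.create_run(name="example-run") as run:
        run.log_model("toy-model")          # model attribute
        run.params.learning_rate = 0.1      # hyperparameter
        losses, accuracy = train()
        for loss in losses:
            run.log_checkpoint({"loss": loss})   # training checkpoint
        run.log_metric("accuracy", accuracy)     # evaluated metric

# The training step itself runs locally without any SigOpt credentials:
losses, accuracy = train()
print(accuracy)
```

Everything logged this way appears on the Run's page in the SigOpt web app, where the checkpoints render as a training curve.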
For a complete list of Run attributes, see the API Reference or the SigOpt Runs docs.