Experiment and Optimization Tutorial
We'll walk through an example of instrumenting a model and running a hyperparameter optimization with SigOpt. In this tutorial, you will learn how to:
  • Install the SigOpt Python client
  • Set your SigOpt API token
  • Set the project
  • Instrument your model
  • Configure your Experiment
  • Create your first Experiment and optimize your model metric with SigOpt
  • Visualize your Experiment results
Before starting, make sure you have Python 3.6+ and pip installed on your environment.
$ python --version
$ pip --version

View this tutorial in a notebook

For notebook instructions and tutorials, check out our GitHub notebook tutorials repo, or open the SigOpt Experiment notebook tutorial in Google Colab.

Step 1 - Install SigOpt Python Client

Install the SigOpt Python package and the libraries required to run the model used for this tutorial.
# Install sigopt
$ pip install sigopt
# Confirm that sigopt >= 8.0.0 is installed
$ sigopt version
# Install XGBoost and scikit-learn. We have tested the sample model used in this tutorial with xgboost==1.5.2, and scikit-learn==1.0.2
$ pip install xgboost scikit-learn

Step 2 - Set Your API Token

Once you've installed SigOpt, you need to get your API token in order to use the SigOpt API and later explore your Runs and Experiments in the SigOpt app. To find your API token, go directly to the API Token page.
If you do not have an account, sign up for a free account and get started with SigOpt today.
# Set sigopt basic configuration. You will be asked to fill in your API token,
# and whether you want SigOpt to collect your model logs and track your model code
$ sigopt config

Step 3 - Set Project

Runs are created within projects. The project allows you to sort and filter through your Runs and Experiments and view useful charts to gain insights into everything you've tried.
# Set the environment variable to the SigOpt project where your Run will be saved.
$ export SIGOPT_PROJECT=my_first_project

Step 4 - Instrument Your Model

The code below is a sample model instrumented with SigOpt where we highlight how to use SigOpt methods to log and track key model information.
Save the lines below in a script called model.py.
# model.py
import sklearn.datasets
import sklearn.metrics
from xgboost import XGBClassifier
import sigopt
# Data preparation required to run and evaluate the sample model
X, y = sklearn.datasets.load_iris(return_X_y=True)
Xtrain, ytrain = X[:100], y[:100]
# Track the name of the dataset used for your Run
sigopt.log_dataset('iris 2/3 training, full test')
# Set n_estimators as the hyperparameter to explore for your Experiment
sigopt.params.setdefault("n_estimators", 100)
# Track the name of the model used for your Run
sigopt.log_model("XGBClassifier")
# Instantiate and train your sample model
model = XGBClassifier(n_estimators=sigopt.params.n_estimators)
model.fit(Xtrain, ytrain)
pred = model.predict(X)
# Track the metric value and metric name for each Run
sigopt.log_metric("accuracy", sklearn.metrics.accuracy_score(y, pred))
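For intuition, sigopt.params behaves like a dictionary of hyperparameters that SigOpt pre-populates with suggested values when your script runs under sigopt optimize; setdefault only fills in values SigOpt has not supplied, so the same script also runs standalone. A minimal sketch of that merge behavior, using a plain dict rather than the real SigOpt client:

```python
# Illustrative stand-in for sigopt.params: suggested values win,
# and a default only fills the gap. (Not the real SigOpt client.)

def resolve_params(suggested, defaults):
    """Merge SigOpt-suggested values over the script's defaults."""
    params = dict(defaults)
    params.update(suggested)  # a suggestion overrides the default
    return params

# Standalone run: no suggestion, so the default is used
print(resolve_params({}, {"n_estimators": 100}))  # {'n_estimators': 100}

# Under `sigopt optimize`: the suggested value wins
print(resolve_params({"n_estimators": 42}, {"n_estimators": 100}))  # {'n_estimators': 42}
```

This is why model.py works both as `python model.py` (a single Run with n_estimators=100) and under `sigopt optimize` (one Run per suggested value).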

Step 5 - Define Your Experiment Configuration

Experiments are created within projects; your Experiment will automatically be created in the project you set in Step 3. The experiment definition also includes a name, parameters (the variables SigOpt will suggest values for), and metrics. You can also set other options that you would like to run your Experiment with.
The parameter names are expected to match the names of the attributes accessed on sigopt.params. Similarly, the metric names should match the names passed to sigopt.log_metric calls. The budget defines how many Runs SigOpt will create for your Experiment.
A SigOpt Experiment can be configured using a YAML configuration file. Save the lines below in a YAML file called experiment.yml.
# experiment.yml
name: XGBoost Optimization
metrics:
  - name: accuracy
    strategy: optimize
    objective: maximize
parameters:
  - name: n_estimators
    type: int
    bounds:
      min: 10
      max: 100
budget: 10
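The experiment file is just a simple nested structure. As an illustration (field names are based on SigOpt's experiment configuration format; treat this as a sketch, not an official API call), here is the same configuration written as a Python dict, which can be handy if you generate experiment files programmatically:

```python
# The experiment configuration as a Python dict (illustrative only).
experiment_config = {
    "name": "XGBoost Optimization",
    "metrics": [
        {"name": "accuracy", "strategy": "optimize", "objective": "maximize"},
    ],
    "parameters": [
        {"name": "n_estimators", "type": "int", "bounds": {"min": 10, "max": 100}},
    ],
    "budget": 10,
}

# The metric name must match the sigopt.log_metric call in model.py,
# and the parameter name must match the sigopt.params attribute.
print(experiment_config["metrics"][0]["name"])     # accuracy
print(experiment_config["parameters"][0]["name"])  # n_estimators
```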

Step 6 - Run the Code

Run the following command to start an experiment using the model from Step 4 and the experiment file from Step 5.
$ sigopt optimize -e experiment.yml python model.py
SigOpt will conveniently output links to the Experiment and Runs pages on our web application.
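Conceptually, sigopt optimize executes your script once per suggested parameter configuration until the budget is exhausted, reading back the logged metric each time. Here is a rough sketch of that loop in plain Python, with a random-search stand-in for SigOpt's optimizer and a fake metric in place of `python model.py` (none of this is SigOpt's actual internals):

```python
# Conceptual sketch of the optimize loop: one "script execution" per
# suggestion, up to the experiment budget. (Not real SigOpt internals.)
import random

def run_model(n_estimators):
    """Stand-in for `python model.py`: returns a fake accuracy."""
    return 0.9 + (n_estimators % 10) / 100

def optimize(budget, suggest):
    """Try `budget` suggested configurations and keep the best metric."""
    best = None
    for _ in range(budget):
        params = suggest()              # the optimizer suggests parameters
        accuracy = run_model(**params)  # your script reports its metric
        if best is None or accuracy > best[1]:
            best = (params, accuracy)
    return best

best_params, best_accuracy = optimize(
    budget=10,
    suggest=lambda: {"n_estimators": random.randint(10, 100)},
)
print(best_params, best_accuracy)
```

In the real Experiment, SigOpt replaces the random `suggest` with its Bayesian optimizer, which uses earlier Runs to pick more promising parameter values.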

Step 7 - Visualize Your Experiment Results

Open the Experiment link to view your Experiment in our web application. Here's a view of the Experiment page once the Experiment is completed.
From the Experiment page, open the History tab to see the list of Runs for your Experiment. Click on any individual run ID link to view any completed Run. Here's a view of a Run page:


In this tutorial, we covered the recommended way to instrument your model, optimize it, and visualize your results with SigOpt. You learned that Experiments are collections of Runs that search a defined parameter space to optimize your model's metrics.
Check out our Runs Tutorial for a closer look at a single Run, and to see how to track one-off runs without creating an experiment.