AI Experiment and Optimization Tutorial
We'll walk through an example of instrumenting a model in order to run a model parameter optimization with SigOpt. In this tutorial, you will learn how to:
- Install the SigOpt Python client
- Set your SigOpt API token
- Set the project
- Instrument your model
- Configure your AI Experiment
- Create your first Experiment and optimize your model metric with SigOpt
- Visualize your Experiment results
This tutorial assumes a working installation of Python and pip. Confirm that both are available:
$ python --version
$ pip --version
For notebook instructions and tutorials, check out our GitHub notebook tutorials repo, or open the SigOpt AI Experiment notebook tutorial in Google Colab.
Install the SigOpt Python package and the libraries required to run the model used for this tutorial.
# Install sigopt
$ pip install sigopt
# Confirm that sigopt >= 8.0.0 is installed
$ sigopt version
# Install XGBoost and scikit-learn. We have tested the sample model used in this tutorial with xgboost==1.5.2 and scikit-learn==1.0.2
$ pip install xgboost scikit-learn
Once you've installed SigOpt, you need to get your API token in order to use the SigOpt API and later explore your Runs and AI Experiments in the SigOpt app. To find your API token, go directly to the API Token page.
# Set sigopt basic configuration. You will be asked to fill in your API token,
# and whether you want SigOpt to collect your model logs and track your model code
$ sigopt config
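If you prefer not to use the interactive prompt (for example, in a CI environment), the token can also be supplied through the SIGOPT_API_TOKEN environment variable instead, as a sketch:

```shell
# Alternative to `sigopt config`: export the API token as an environment
# variable. Replace the placeholder with your own token from the API Token page.
$ export SIGOPT_API_TOKEN=<your-api-token>
```

The interactive `sigopt config` flow remains the recommended path for local development, since it also records your logging preferences.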
Runs are created within projects. The project allows you to sort and filter through your Runs and AI Experiments and view useful charts to gain insights into everything you've tried.
# Set the environment variable to the SigOpt project where your Run will be saved.
$ export SIGOPT_PROJECT=my_first_project
The code below is a sample model instrumented with SigOpt where we highlight how to use SigOpt methods to log and track key model information.
Save the lines below in a script called model.py.
import sklearn.datasets
import sklearn.metrics
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier
import sigopt
# Data preparation required to run and evaluate the sample model
X, y = sklearn.datasets.load_iris(return_X_y=True)
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, test_size=0.33)
# Track the name of the dataset used for your Run
sigopt.log_dataset("iris 2/3 training, full test")
# Set n_estimators as the hyperparameter to explore for your AI Experiment
sigopt.params.setdefault("n_estimators", 100)
# Track the name of the model used for your Run
sigopt.log_model("xgboost")
# Instantiate and train your sample model
model = XGBClassifier(
    n_estimators=sigopt.params.n_estimators,
    use_label_encoder=False,
    eval_metric="logloss",
)
model.fit(Xtrain, ytrain)
pred = model.predict(Xtest)
# Track the metric value and metric name for each Run
sigopt.log_metric("accuracy", sklearn.metrics.accuracy_score(ytest, pred))
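The sigopt.params.setdefault call above behaves like Python's dict.setdefault: when `sigopt optimize` supplies a suggested value, the default is ignored; when the script runs on its own, the default is used. A plain-dict sketch of that behavior (the dicts here are stand-ins for illustration, not the SigOpt API):

```python
# No value assigned yet: setdefault falls back to the default,
# as when model.py is run outside of an optimization loop.
params = {}
params.setdefault("n_estimators", 100)
assert params["n_estimators"] == 100

# A value is already present (as if suggested by `sigopt optimize`):
# setdefault leaves it untouched and the default is ignored.
suggested = {"n_estimators": 42}
suggested.setdefault("n_estimators", 100)
assert suggested["n_estimators"] == 42
```

This is why the same model.py works both standalone and inside an AI Experiment.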
AI Experiments are created within projects; your AI Experiment will automatically be created in the project you set in Step 3. The experiment definition also includes a name, parameters (variables that SigOpt will suggest), and metrics. You can also set other options for your AI Experiment.
The names of the parameters are expected to match the names of the attributes on sigopt.params. Similarly, the metric names should match the names passed to sigopt.log_metric calls. The budget defines how many Runs you will create for your AI Experiment. A SigOpt AI Experiment can be configured using a YAML configuration file. Save the lines below in a YAML file called experiment.yml.
name: XGBoost Optimization
metrics:
  - name: accuracy
    strategy: optimize
    objective: maximize
parameters:
  - name: n_estimators
    bounds:
      min: 10
      max: 100
    type: int
budget: 10
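To make the configuration concrete, here is an illustrative loop over the same search space. SigOpt itself uses Bayesian optimization rather than random sampling; this sketch only shows what bounds and budget mean, and every name in it is local to the sketch:

```python
import random

# From experiment.yml: n_estimators is an integer in [10, 100],
# and the budget allows 10 Runs.
bounds = {"min": 10, "max": 100}
budget = 10

best = None
for _ in range(budget):
    n_estimators = random.randint(bounds["min"], bounds["max"])
    # In the real experiment, `sigopt optimize` would execute model.py
    # with this suggested value and read back the logged accuracy metric.
    accuracy = 0.5  # placeholder for the measured metric
    if best is None or accuracy > best[1]:
        best = (n_estimators, accuracy)

print(best)
```

With the placeholder metric every Run scores the same, but the loop structure is the point: each Run draws a candidate from the parameter space, evaluates the metric, and the best (parameter, metric) pair is tracked across the budget.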
Run the following command to start an experiment using the model from Step 4 and the experiment file from Step 5.
$ sigopt optimize -e experiment.yml python model.py
SigOpt will conveniently output links to the AI Experiment and Runs pages on our web application.
Open the AI Experiment link to view your AI Experiment in our web application. Here's a view of the Experiment page once the Experiment is completed.

From the Experiment page, open the History tab to see the list of Runs for your AI Experiment. Click on any individual run ID link to view any completed Run. Here's a view of a Run page:

In this tutorial, we covered the recommended way to instrument and optimize your model, and visualize your results with SigOpt. You learned that experiments are collections of runs that search through a defined parameter space to satisfy the experiment search criteria.
Check out our tutorial, Runs Tutorial, for a closer look at a single Run, and see how to track one-off runs without creating an experiment.