Tuning XGBoost Models

sigopt.xgboost.experiment

The sigopt.xgboost.experiment function simplifies hyperparameter tuning for an XGBoost model by automatically creating and running a SigOpt optimization Experiment. It also extends the automatic parameter, metric, and metadata logging of our sigopt.xgboost.run API to the SigOpt experimentation platform.
Automatic logging is only one of its features; sigopt.xgboost.experiment offers the following key improvements over the existing SigOpt Experiment API when tuning an XGBoost model:
  • A simplified and streamlined API that knows exactly what it is tuning (an XGBoost model) and makes intelligent decisions accordingly.
  • Automatic selection of the parameter search space, optimization metric, and the tuning budget.
  • A preset list of standard optimization metrics to choose from.
  • An improved hyperparameter optimization routine that leverages advanced methods in metalearning and multi-fidelity optimization to learn a more performant model in less time.
This API has been designed with ease-of-use in mind, so that you may run an XGBoost Experiment as effortlessly as possible.

Examples

To give you an initial feel for how you might use the sigopt.xgboost.experiment API, we provide multiple examples showcasing its simplicity and flexibility. Our API aims to reduce the overall complexity of intelligent experimentation and hyperparameter optimization by automatically selecting parameters, metrics, and even the budget where needed.
The examples below are presented in order of increasing complexity.
Automatic Experiment Configuration
The parameter search space, metric, and budget are determined by SigOpt based on the training data provided.
```python
from sklearn import datasets
from sklearn.model_selection import train_test_split
import xgboost
import sigopt.xgboost

bc = datasets.load_breast_cancer()
(Xtrain, Xtest, ytrain, ytest) = train_test_split(bc.data, bc.target, test_size=0.5, random_state=42)
dtrain = xgboost.DMatrix(data=Xtrain, label=ytrain, feature_names=bc.feature_names)
dtest = xgboost.DMatrix(data=Xtest, label=ytest, feature_names=bc.feature_names)

my_config = dict(
    name="My XGBoost Experiment",  # Let SigOpt set the tuning parameters, metrics, and budget.
)
experiment = sigopt.xgboost.experiment(
    experiment_config=my_config,
    dtrain=dtrain,
    evals=[(dtest, "val_set")],
    params={"objective": "binary:logistic"},  # XGBoost parameters to be fixed for all runs
)
```
Auto Parameter and Metric Selection
The parameter search bounds are determined by SigOpt based on the training data provided. Select the optimized metric with a single string.
```python
from sklearn import datasets
from sklearn.model_selection import train_test_split
import xgboost
import sigopt.xgboost

bc = datasets.load_breast_cancer()
(Xtrain, Xtest, ytrain, ytest) = train_test_split(bc.data, bc.target, test_size=0.5, random_state=42)
dtrain = xgboost.DMatrix(data=Xtrain, label=ytrain, feature_names=bc.feature_names)
dtest = xgboost.DMatrix(data=Xtest, label=ytest, feature_names=bc.feature_names)

my_config = dict(
    name="My XGBoost Experiment",
    parameters=[
        dict(name="max_depth"),  # Let SigOpt set the appropriate bounds
        dict(name="eta"),
    ],
    metrics="accuracy",  # Only use the metric name
    budget=20,
)
experiment = sigopt.xgboost.experiment(
    experiment_config=my_config,
    dtrain=dtrain,
    evals=[(dtest, "val_set")],
    params={"objective": "binary:logistic"},  # XGBoost parameters to be fixed for all runs
)
```
Auto Parameter Selection
The parameter search bounds are determined by SigOpt based on the training data provided. The optimized metric is fully specified with a dict.
```python
from sklearn import datasets
from sklearn.model_selection import train_test_split
import xgboost
import sigopt.xgboost

bc = datasets.load_breast_cancer()
(Xtrain, Xtest, ytrain, ytest) = train_test_split(bc.data, bc.target, test_size=0.5, random_state=42)
dtrain = xgboost.DMatrix(data=Xtrain, label=ytrain, feature_names=bc.feature_names)
dtest = xgboost.DMatrix(data=Xtest, label=ytest, feature_names=bc.feature_names)

my_config = dict(
    name="My XGBoost Experiment",
    parameters=[
        dict(name="max_depth"),  # Let SigOpt set the appropriate bounds
        dict(name="eta"),
    ],
    metrics=[
        dict(name="accuracy", strategy="optimize", objective="maximize"),
    ],
    budget=20,
)
experiment = sigopt.xgboost.experiment(
    experiment_config=my_config,
    dtrain=dtrain,
    evals=[(dtest, "val_set")],
    params={"objective": "binary:logistic"},  # XGBoost parameters to be fixed for all runs
)
```
Full Experiment Configuration
A fully enumerated experiment config, which includes a full parameter search space with type and bounds, a full metric list that asks SigOpt to maximize classification accuracy, and an explicit budget.
```python
from sklearn import datasets
from sklearn.model_selection import train_test_split
import xgboost
import sigopt.xgboost

bc = datasets.load_breast_cancer()
(Xtrain, Xtest, ytrain, ytest) = train_test_split(bc.data, bc.target, test_size=0.5, random_state=42)
dtrain = xgboost.DMatrix(data=Xtrain, label=ytrain, feature_names=bc.feature_names)
dtest = xgboost.DMatrix(data=Xtest, label=ytest, feature_names=bc.feature_names)

my_config = dict(
    name="My XGBoost Experiment",
    parameters=[
        dict(name="max_depth", type="int", bounds=dict(min=3, max=12)),
        dict(name="eta", type="double", bounds=dict(min=0.05, max=0.5), transformation="log"),
    ],
    metrics=[
        dict(name="accuracy", strategy="optimize", objective="maximize"),
    ],
    budget=20,
)
experiment = sigopt.xgboost.experiment(
    experiment_config=my_config,
    dtrain=dtrain,
    evals=[(dtest, "val_set")],
    params={"objective": "binary:logistic"},  # XGBoost parameters to be fixed for all runs
)
```
These simple examples are made possible by key advances from SigOpt Research. Note that increased simplicity comes at the cost of flexibility; if you omit the metric, for example, SigOpt optimizes the metric it selects for you.

Input Arguments for sigopt.xgboost.experiment

The API for an XGBoost Experiment follows:
| Argument | Type | Description |
|---|---|---|
| experiment_config | dict | The configuration of the Experiment. See the following section for more information. |
| dtrain | xgboost.DMatrix | The training dataset. |
| evals | xgboost.DMatrix or array<(xgboost.DMatrix, string)> | The validation set(s). If a list is provided, the first dataset is used to compute the optimization metric. |
| params | dict | The XGBoost parameters, e.g. tree_method, that remain fixed throughout the course of the Experiment. See the XGBoost Parameters documentation for more information. |
| num_boost_round | int | Optional. The number of boosting rounds. Leave this argument blank if num_boost_round is specified in the parameters field of the experiment_config. |
| early_stopping_rounds | int | Optional. XGBoost stops training when the validation metric has not improved for early_stopping_rounds rounds. NOTE: SigOpt sets early_stopping_rounds to 10 by default. To turn off early stopping, explicitly set it to None. |
| run_options | dict | Optional. A dictionary specifying the autologging capabilities. See the SigOpt XGBoost Run documentation for more information. |
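The optional training arguments compose with the call shown in the examples above. The sketch below is illustrative only (the dtrain/dtest objects and the call itself are assumed to be set up as in the earlier examples; nothing here contacts SigOpt):

```python
# Hedged sketch: combining the optional arguments with experiment_config.
# The values shown are arbitrary choices for illustration.
call_kwargs = dict(
    experiment_config=dict(name="My XGBoost Experiment"),
    params={"objective": "binary:logistic"},
    num_boost_round=50,          # omit this if num_boost_round is tuned via `parameters`
    early_stopping_rounds=None,  # default is 10; None disables early stopping entirely
)

# With dtrain/dtest built as in the examples above, the call would look like:
# experiment = sigopt.xgboost.experiment(dtrain=dtrain, evals=[(dtest, "val_set")], **call_kwargs)
```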
The experiment_config is most important to understand, since it not only determines how your Experiment executes, but also possesses the most flexibility and extensibility out of all XGBoost Experiment API arguments. Thus, we explain it next.

Specifying sigopt.xgboost.experiment through experiment_config

An experiment config has the following keys:
| Key | Type | Value |
|---|---|---|
| name | string | Name of the Experiment. |
| parameters | array<Parameter> | Optional. An array of Parameter objects. See the Parameter Space subsection below for more information. |
| metrics | array<Metric> or string | Optional. An array of Metric objects. See the Metric Space subsection below for more information. |
| budget | int | Optional. An integer defining the minimum number of SigOpt Runs in a given SigOpt Experiment. |

Parameter Space

We show an illustrative example below on how to set the parameter space.
```python
sigopt.xgboost.experiment(...,
    parameters=[
        {"name": "eta"},  # name only
        {"name": "num_boost_round", "bounds": {"min": 10, "max": 100}},  # name and bounds only
        {"name": "tree_method", "type": "categorical", "categorical_values": ["exact", "hist"]},
    ]
)
```
There are three different ways of specifying the Experiment parameters:
  • name only: SigOpt autoselects the bounds and type.
  • name and type: SigOpt autoselects the bounds.
  • name, type, and bounds/categorical_values: Explicit parameter specification.
These specifications may be mixed as in the example above. Currently, SigOpt only autoselects the bounds for the following parameters:
```
eta, max_delta_step, alpha, gamma, lambda, max_depth, min_child_weight,
num_boost_round, colsample_bylevel, colsample_bynode, colsample_bytree
```
Any parameter that is not on this list must have its bounds or categorical_values explicitly stated.
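This rule can be checked before launching an Experiment. The helper below is illustrative only and not part of the SigOpt API; its AUTOSELECTABLE set is copied from the list above:

```python
# Parameters for which SigOpt can autoselect bounds, per the list above.
AUTOSELECTABLE = {
    "eta", "max_delta_step", "alpha", "gamma", "lambda", "max_depth",
    "min_child_weight", "num_boost_round", "colsample_bylevel",
    "colsample_bynode", "colsample_bytree",
}

def check_parameters(parameters):
    """Raise if a parameter outside the autoselect list lacks bounds/categorical_values."""
    for p in parameters:
        if p["name"] not in AUTOSELECTABLE and not ({"bounds", "categorical_values"} & p.keys()):
            raise ValueError(f"{p['name']} needs explicit bounds or categorical_values")

check_parameters([{"name": "eta"}])  # fine: bounds are autoselected
check_parameters([{"name": "tree_method", "categorical_values": ["exact", "hist"]}])  # fine
# check_parameters([{"name": "subsample"}])  # would raise: not on the autoselect list
```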

Metric Space

The metric space of an Experiment is defined by both the metrics argument of the experiment_config and the datasets listed in the evals argument.
There are two ways of specifying the metric space.
```python
# Option 1: Fully specified, like a SigOpt Experiment
metrics = [{"name": "accuracy", "strategy": "optimize", "objective": "maximize"}]

# Option 2: Using only a string
metrics = "F1"
```
Below is a table of the metrics we natively support for classification and regression.
| Task | Options | Default |
|---|---|---|
| Classification | accuracy, F1, precision, recall | accuracy |
| Regression | mean absolute error, mean squared error | mean squared error |
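The string form is shorthand for the fully specified dict. A hypothetical expansion helper (not part of the SigOpt API) makes the equivalence concrete; the objectives follow from the metric definitions, since higher classification scores are better while lower error is better:

```python
# Illustrative only: expand the string shorthand into the full Metric list.
OBJECTIVES = {
    "accuracy": "maximize", "F1": "maximize",
    "precision": "maximize", "recall": "maximize",
    "mean absolute error": "minimize", "mean squared error": "minimize",
}

def expand_metric(name):
    """Expand a supported metric name into the fully specified form."""
    return [{"name": name, "strategy": "optimize", "objective": OBJECTIVES[name]}]

# metrics="F1" behaves like this fully specified form:
print(expand_metric("F1"))  # [{'name': 'F1', 'strategy': 'optimize', 'objective': 'maximize'}]
```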