Tuning XGBoost Models

sigopt.xgboost.experiment

The sigopt.xgboost.experiment function simplifies hyperparameter tuning for an XGBoost model by automatically creating and running a SigOpt AI Experiment. It also extends the automatic parameter, metric, and metadata logging of our sigopt.xgboost.run API to the SigOpt experimentation platform.

However, automatic logging is only one of its features; sigopt.xgboost.experiment offers the following crucial improvements over the existing SigOpt AI Experiment API when tuning an XGBoost model:

  • A simplified and streamlined API that knows the exact problem it is tuning (XGBoost) and makes intelligent decisions accordingly.

  • Automatic selection of the parameter search space, optimization metric, and the tuning budget.

  • A preset list of standard optimization metrics to choose from.

  • An improved hyperparameter optimization routine that leverages advanced methods in metalearning and multi-fidelity optimization to learn a more performant model in less time.

This API has been designed with ease-of-use in mind, so that you may run an XGBoost Experiment as effortlessly as possible.

Examples

To give you an initial feel for how you might use the sigopt.xgboost.experiment API, we provide multiple examples showcasing its simplicity and flexibility. Our API aims to reduce the overall complexity of intelligent experimentation and hyperparameter optimization by automatically selecting parameters, metrics, and even the budget where needed.

The examples below are presented in order of increasing complexity.

Automatic Experiment Configuration

The parameter search space, metric, and budget are determined by SigOpt based on the training data provided.

from sklearn import datasets
from sklearn.model_selection import train_test_split
import xgboost
import sigopt.xgboost

bc = datasets.load_breast_cancer()
(Xtrain, Xtest, ytrain, ytest) = train_test_split(bc.data, bc.target, test_size=0.5, random_state=42)
dtrain = xgboost.DMatrix(data=Xtrain, label=ytrain, feature_names=bc.feature_names)
dtest = xgboost.DMatrix(data=Xtest, label=ytest, feature_names=bc.feature_names)

my_config = dict(
 name="My XGBoost Experiment", # Let SigOpt set the tuning parameters, metrics, and budget.
)
experiment = sigopt.xgboost.experiment(
 experiment_config=my_config,
 dtrain=dtrain,
 evals=[(dtest, "val_set")],
 params={"objective": "binary:logistic"}, # XGBoost parameters to be fixed for all runs
)

These simple examples are made possible by key research advances from SigOpt Research. It is worth noting that the increase in simplicity corresponds to a decrease in flexibility; if you opt to omit the metric, for example, you will optimize the metric SigOpt selects for you.

Input Arguments for sigopt.xgboost.experiment

The API for an XGBoost Experiment follows:
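As a quick reference, the call below annotates each argument from the first example. These are the arguments used throughout this guide; the annotations are our own summaries, not an exhaustive argument reference.

experiment = sigopt.xgboost.experiment(
 experiment_config=my_config,             # dict configuring the Experiment; its keys are described below
 dtrain=dtrain,                           # xgboost.DMatrix containing the training data
 evals=[(dtest, "val_set")],              # list of (DMatrix, name) pairs on which metrics are computed
 params={"objective": "binary:logistic"}, # XGBoost parameters held fixed across all runs
)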

The experiment_config argument is the most important to understand: it not only determines how your Experiment executes, but also offers the most flexibility and extensibility of all the XGBoost Experiment API arguments. We explain it next.

Specifying sigopt.xgboost.experiment through experiment_config

An experiment config has the following keys:
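As an illustration, the sketch below combines the keys discussed throughout this guide: name, parameters, metrics, and budget. Any key you omit is autoselected by SigOpt, as in the first example; the budget value here is an arbitrary assumption, not a recommendation.

my_config = dict(
 name="My fully specified XGBoost Experiment",
 parameters=[
   {"name": "eta"}, # bounds and type autoselected by SigOpt
   {"name": "max_depth", "bounds": {"min": 2, "max": 12}},
 ],
 metrics="accuracy", # a preset metric string (see Metric Space below)
 budget=30, # number of optimization runs; an assumed value
)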

Parameter Space

The illustrative example below shows how to set the parameter space.

sigopt.xgboost.experiment(...,
  parameters = [
    {"name": "eta"}, # name only
    {"name": "num_boost_round", "bounds": {"min": 10, "max": 100}}, # name and bounds only
    {"name": "tree_method", "type": "categorical", "categorical_values": ["exact", "hist"]},
  ]
)

There are three different ways of specifying the Experiment parameters:

  • name only: SigOpt autoselects the bounds and type.

  • name and bounds/categorical_values: SigOpt autoselects the type.

  • name, type, and bounds/categorical_values: Explicit parameter specification.

These specifications may be mixed as in the example above. Currently, SigOpt only autoselects the bounds for the following parameters:

eta, max_delta_step, alpha, gamma, lambda, 
max_depth, min_child_weight, num_boost_round, 
colsample_bylevel, colsample_bynode, colsample_bytree

Any parameter that is not on this list must have its bounds or categorical_values explicitly stated.
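For instance, subsample is not on the list above, so its bounds must be given explicitly. A minimal sketch (the range is an illustrative assumption, not a recommendation):

sigopt.xgboost.experiment(...,
  parameters = [
    {"name": "eta"}, # on the list above, so bounds are autoselected
    # subsample is not on the list, so bounds must be stated explicitly;
    # the range below is an illustrative assumption
    {"name": "subsample", "type": "double", "bounds": {"min": 0.5, "max": 1.0}},
  ]
)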

Metric Space

The metric space of an Experiment is defined by both the metrics argument of the experiment_config and the datasets listed in the evals argument.

There are two ways of specifying the metric space.

# Option 1: Fully specified like a SigOpt Experiment
metrics = [{"name": "accuracy", "strategy": "optimize", "objective": "maximize"}]

# Option 2: Using only a string
metrics = "F1"
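Either form is passed through the experiment_config. A minimal sketch using the string form with the data from the first example (the Experiment name is an arbitrary choice):

experiment = sigopt.xgboost.experiment(
 experiment_config=dict(
   name="My XGBoost Experiment", # arbitrary name
   metrics="F1", # optimize F1, computed on the evals data
 ),
 dtrain=dtrain,
 evals=[(dtest, "val_set")],
 params={"objective": "binary:logistic"},
)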

Below is a table of the metrics we natively support for classification and regression.
