sigopt.xgboost.experiment
The sigopt.xgboost.experiment function simplifies the hyperparameter tuning process of an XGBoost model by automatically creating and running a SigOpt optimization Experiment. This function also extends the automatic parameter, metric, and metadata logging of our sigopt.xgboost.run API to the SigOpt experimentation platform. sigopt.xgboost.experiment offers several crucial improvements over the existing SigOpt Experiment API when tuning an XGBoost model.

To demonstrate the sigopt.xgboost.experiment API, we provide multiple examples showcasing its simplicity and flexibility. The API aims to reduce the overall complexity of intelligent experimentation and hyperparameter optimization by automatically selecting parameters, metrics, and even the budget where needed.
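As a first sketch, the snippet below builds a minimal experiment configuration and shows, in comments, roughly how the call would look. The config values, dataset names, and metric-entry format are illustrative assumptions, not prescriptions from this reference.

```python
# Hypothetical minimal sketch of using sigopt.xgboost.experiment.
# All concrete values below are illustrative assumptions.
my_config = dict(
    name="My XGBoost Experiment",
    # name-only parameters: SigOpt autoselects their types and bounds
    parameters=[dict(name="eta"), dict(name="max_depth")],
    metrics=[dict(name="accuracy")],
    budget=10,
)

# With real data and a configured SigOpt API token, the call would look
# roughly like this:
#
# import sigopt.xgboost
# experiment = sigopt.xgboost.experiment(
#     experiment_config=my_config,
#     dtrain=dtrain,                   # assumed xgboost.DMatrix of training data
#     evals=[(dvalid, "validation")],  # datasets the metrics are computed on
#     params={"objective": "binary:logistic"},  # fixed XGBoost parameters
# )
```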
The sigopt.xgboost.experiment function accepts the following arguments:

- experiment_config (dict): Configuration of the SigOpt Experiment, including its name, parameters, metrics, and budget.
- evals: The evaluation dataset or datasets on which the metrics are computed.
- params (dict): XGBoost parameters, such as tree_method, you plan on fixing throughout the course of the Experiment. See the description of these parameters at the XGBoost Parameters documentation for more information.
- num_boost_round (int): Number of boosting rounds. Ignored if num_boost_round is specified in the parameters field in the experiment_config.
- early_stopping_rounds (int): Number of rounds without improvement before training stops early, as in early_stopping_rounds for XGBoost training. NOTE: SigOpt sets early_stopping_rounds to 10 by default. To turn off early stopping, explicitly set it to None.
- run_options (dict): Options passed through to the underlying sigopt.xgboost.run executions.
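To make the division of labor between params and experiment_config concrete, here is a sketch of the fixed-versus-tuned split; the specific parameter choices and the exclusivity check are illustrative assumptions.

```python
# Sketch of the split between fixed and tuned settings; the specific
# values are illustrative assumptions. Everything in `params` is held
# constant for every training run of the Experiment:
fixed_params = {
    "objective": "binary:logistic",
    "tree_method": "hist",  # e.g. fix the tree construction algorithm
}

# ...while parameters named in the experiment_config are tuned by SigOpt:
tuned_parameters = [dict(name="eta"), dict(name="max_depth")]

# A given XGBoost parameter should live in one place or the other, not both.
tuned_names = {p["name"] for p in tuned_parameters}
assert not tuned_names & set(fixed_params)
```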
Of these arguments, experiment_config is the most important to understand, since it not only determines how your Experiment executes, but also possesses the most flexibility and extensibility of all the XGBoost Experiment API arguments. Thus, we explain it next.

Configuring sigopt.xgboost.experiment through experiment_config
The experiment_config accepts the following fields:

- name (string): Name of the Experiment.
- parameters: The XGBoost parameters to be tuned.
- metrics: The metric or metrics to optimize.
- budget (int): The number of optimization runs allotted to the Experiment.
The parameters field supports three levels of specification:

- name only: SigOpt autoselects the bounds and type.
- name and type: SigOpt autoselects the bounds.
- name, type, and bounds/categorical_values: Explicit parameter specification, with bounds or categorical_values explicitly stated.
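The three levels can be sketched as entries of a single parameters list; the concrete bounds and categorical values below are illustrative assumptions.

```python
# The three levels of parameter specification as a sketch; the concrete
# bounds and values are illustrative assumptions.
parameters = [
    # 1) name only: SigOpt autoselects the bounds and type.
    dict(name="max_depth"),
    # 2) name and type: SigOpt autoselects the bounds.
    dict(name="eta", type="double"),
    # 3) name, type, and bounds: fully explicit specification.
    dict(name="gamma", type="double", bounds=dict(min=0.0, max=5.0)),
    # Categorical parameters state their categorical_values explicitly.
    dict(
        name="grow_policy",
        type="categorical",
        categorical_values=["depthwise", "lossguide"],
    ),
]
```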
The metrics are determined by the metrics argument of the experiment_config and the datasets listed in the evals argument. For classification problems, the supported metrics are accuracy, F1, precision, and recall, with accuracy as the default. For regression problems, the supported metrics are mean absolute error and mean squared error, with mean squared error as the default.
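For instance, a classification Experiment could request F1 instead of the default accuracy. The metric-entry format in this sketch is an assumption borrowed from standard SigOpt experiment configurations.

```python
# Sketch: requesting a non-default metric for a classification task.
# Names and values here are illustrative assumptions.
classification_config = dict(
    name="Tune XGBoost for F1",
    parameters=[dict(name="max_depth")],
    metrics=[dict(name="F1")],  # override the default metric, accuracy
    budget=15,
)

# For a regression task, omitting `metrics` would fall back to the
# default, mean squared error.
```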