CLI Reference
SigOpt is a command-line tool for managing training clusters and running optimization experiments.
The cluster configuration file is commonly referred to as `cluster.yml`, but you can name yours anything you like. The file is used when you create a SigOpt cluster with `sigopt cluster create -f cluster.yml`. After the cluster has been created, you can update the configuration file to change the number of nodes in your cluster or change instance types, and apply the changes by running `sigopt cluster update -f cluster.yml`. Some updates might not be supported, for example introducing GPU nodes to your cluster in some regions. If an update is not supported, you will need to destroy the cluster and create it again.

The available fields are:
Field | Required? | Description |
---|---|---|
`cpu`, or `gpu` | Yes | You must provide at least one of either `cpu` or `gpu`. Define the compute that your cluster will need in terms of: `instance_type`, `max_nodes`, and `min_nodes`. It is recommended that you set `min_nodes` to 0 so the autoscaler can remove all of your expensive compute nodes when they aren't in use. It's OK if `max_nodes` and `min_nodes` are the same value, as long as `max_nodes` is not 0. |
`cluster_name` | Yes | You must provide a name for your cluster. You will share this with anyone else who wants to connect to your cluster. |
`aws` | No | Override environment-provided values for `aws_access_key_id` or `aws_secret_access_key`. Additional IAM policies can also be attached here via `additional_policies` (see below). |
`kubernetes_version` | No | The version of Kubernetes to use for your cluster. Currently supports Kubernetes 1.16, 1.17, 1.18, and 1.19. Defaults to the latest stable version supported by SigOpt, which is currently 1.18. |
`provider` | No | Currently, AWS is our only supported provider for creating clusters. You can, however, use a custom provider to connect to your own Kubernetes cluster with `sigopt cluster connect`. See the page on bringing your own Kubernetes cluster. |
`system` | No | System nodes are required to run the autoscaler. You can specify the number and type of system nodes with `min_nodes`, `max_nodes`, and `instance_type`. The value of `min_nodes` must be at least 1 so that you always have at least one system node. The defaults for `system` are `min_nodes: 1`, `max_nodes: 2`, and `instance_type: "t3.large"`. |
The example YAML file below defines a CPU cluster named `tiny-cluster` with up to two `t2.small` AWS instances.

```yaml
# cluster.yml
# AWS is currently our only supported provider for cluster create
# You can connect to custom clusters via `sigopt cluster connect`
provider: aws
# We have provided a name that is short and descriptive
cluster_name: tiny-cluster
# Your cluster config can have CPU nodes, GPU nodes, or both.
# The configuration of your nodes is defined in the sections below.
# (Optional) Define CPU compute here
cpu:
  # AWS instance type
  instance_type: t2.small
  max_nodes: 2
  min_nodes: 0
# # (Optional) Define GPU compute here
# gpu:
#   # AWS GPU-enabled instance type
#   # This can be any p* instance type
#   instance_type: p2.xlarge
#   max_nodes: 2
#   min_nodes: 0
kubernetes_version: '1.19'
```
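Once you have a cluster configuration file like the one above, the commands described earlier consume it via `-f`. A minimal sketch of the typical workflow, using the file name chosen here:

```bash
# Create the cluster described in cluster.yml
$ sigopt cluster create -f cluster.yml

# Later, after editing cluster.yml (for example, raising max_nodes),
# apply the changes to the running cluster
$ sigopt cluster update -f cluster.yml
```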
The SigOpt run configuration file tells SigOpt how to set up and run the model, which metrics to track, and which hyperparameters to tune.

You can use a SigOpt run configuration YAML file you've already created, or SigOpt will auto-generate `run.yml` and `cluster.yml` template files for you if you run the following:

```bash
$ sigopt init
```
The available fields for `run.yml` are:

Field | Required? | Description |
---|---|---|
`image` | Yes | Name of the Docker image SigOpt creates for you. You can also point this to an existing Docker image to use with SigOpt. |
`name` | Yes | Name for your run. |
`aws` | No | AWS access credentials to use with the run; used to access S3 during model execution. |
`resources` | No | Resources to allocate to each run. Can specify limits and requests for CPU, memory, and ephemeral storage, and can specify GPUs. |
`run` | No | Command used to execute your model, e.g. `python mymodel.py`. |
To orchestrate an optimization AI Experiment, you will also need to specify an `experiment.yml` file.

The available fields are:
Field | Required? | Description |
---|---|---|
`budget` | Yes | Number of Runs for a SigOpt AI Experiment. |
`metrics` | Yes | Evaluation and storage metrics for a SigOpt AI Experiment. |
`name` | Yes | Name for your AI Experiment. |
`parameters` | Yes | Parameters and ranges specified for a SigOpt AI Experiment. |
`type` | Yes | Type of AI Experiment to execute: `offline` for SigOpt Optimization and All Constraint Experiments, `random` for Random Search, or `grid` for Grid Search. |
`parallel_bandwidth` | No | Number of parallel workers. |
When specifying CPUs, valid amounts are whole numbers (1, 2), and fractional numbers or millis (1.5 and 1500m both represent 1.5 CPU). When specifying memory, valid amounts are shown in the Kubernetes documentation for memory resources, but some examples are 1e9, 1Gi, 500M. For GPUs, only whole numbers are valid.
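For instance, a `requests` block using those notations might look like the sketch below (values are illustrative; the surrounding `resources` layout matches the run examples later on this page):

```yaml
resources:
  requests:
    # CPU can be a whole number, a fraction, or millicores:
    # 1.5 and 1500m both mean one and a half logical cores.
    cpu: 1500m
    # Memory is a number of bytes, optionally with a suffix:
    # 1e9 (bytes), 500M (megabytes), and 1Gi (gibibytes) are all valid.
    memory: 1Gi
```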
When choosing the resources for a single model training run, it's important to keep in mind that some resources on your cluster will be auto-reserved for Kubernetes processes. For this reason, you must specify fewer resources for your model than are available on each node. A good rule of thumb is to assume that your node has about 0.5 CPU less than its total available for running your model.
For example, if your nodes have 8 CPUs, then you must specify fewer than 8 CPUs in the `requests` section of your `resources` in order for your model to run. Keep in mind that you can specify fractional amounts of CPU, e.g. 7.5 or 7500m.

Here's an example of SigOpt Run and AI Experiment YAML files:
```yaml
# run.yml
name: My Run
run: python mymodel.py
resources:
  requests:
    cpu: 0.5
    memory: 512Mi
  limits:
    cpu: 2
    memory: 4Gi
  gpus: 1
image: my-run
```
```yaml
# experiment.yml
name: SGD Classifier HPO
metrics:
  - name: accuracy
parameters:
  - name: l1_ratio
    type: double
    bounds:
      min: 0
      max: 1.0
  - name: log_alpha
    type: double
    bounds:
      min: -5
      max: 2
parallel_bandwidth: 2
budget: 60
```
The best way to learn the most up-to-date information about cluster commands is from the command-line interface (CLI) itself! Append `--help` to any command to learn about subcommands, arguments, and flags.

For example, to learn more about all SigOpt commands, run:

```bash
$ sigopt --help
```

To learn more about the specific `sigopt cluster optimize` command, run:

```bash
$ sigopt cluster optimize --help
```
Users creating AWS clusters with SigOpt can easily interface with different AWS services. To give your cluster permission to access other AWS services, provide additional AWS policies in the `aws.additional_policies` section of the cluster configuration file:

```yaml
cluster_name: cluster-with-s3-access
provider: aws
cpu:
  instance_type: t2.small
  min_nodes: 0
  max_nodes: 2
aws:
  additional_policies:
    - arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
```
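As an illustration of why you might attach a policy like `AmazonS3ReadOnlyAccess`, the sketch below shows model code reading a training file from S3 with `boto3`. The bucket and object names are hypothetical, and this assumes `boto3` is installed in your model environment; it is not part of the SigOpt CLI.

```python
import boto3

# With AmazonS3ReadOnlyAccess attached to the cluster, the node's IAM role
# lets boto3 authenticate without embedding credentials in your model code.
s3 = boto3.client("s3")

# Hypothetical bucket and key, used only for illustration.
s3.download_file("my-training-data-bucket", "datasets/train.csv", "train.csv")
```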
The SigOpt CLI integrates seamlessly with the SigOpt API to optimize the hyperparameters of your model. It handles communication with the SigOpt API under the hood, so that you only need to focus on your model, some lightweight installation requirements, and your experiment configuration file.
As you write your model, use a few lines of code from the `sigopt` package to read hyperparameters and report your model's metric(s).

Below is a comparison of two nearly identical multilayer perceptron models. The first example does not use SigOpt; the second does. As you can see, the SigOpt version uses `sigopt.params` to read parameter assignments from SigOpt, and `sigopt.log_metric` to send its metric values back to SigOpt.

Without SigOpt:

```python
import numpy
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.optimizers import SGD

x_train = numpy.random.random((1000, 20))
y_train = keras.utils.to_categorical(
    numpy.random.randint(10, size=(1000, 1)),
    num_classes=10,
)
x_test = numpy.random.random((100, 20))
y_test = keras.utils.to_categorical(
    numpy.random.randint(10, size=(100, 1)),
    num_classes=10,
)

dropout_rate = 0.5

model = Sequential()
model.add(Dense(
    units=64,
    activation='relu',
    input_dim=20,
))
model.add(Dropout(dropout_rate))
model.add(Dense(
    units=64,
    activation='relu',
))
model.add(Dropout(dropout_rate))
model.add(Dense(10, activation='softmax'))

sgd = SGD(
    lr=0.01,
    decay=1e-6,
    momentum=0.9,
    nesterov=True,
)
model.compile(
    loss='categorical_crossentropy',
    optimizer=sgd,
    metrics=['accuracy'],
)

model.fit(
    x=x_train,
    y=y_train,
    epochs=20,
    batch_size=128,
)

evaluation_loss, accuracy = model.evaluate(
    x=x_test,
    y=y_test,
    batch_size=128,
)

print('evaluation_loss:', evaluation_loss)
print('accuracy:', accuracy)
```
With SigOpt:

```python
import numpy
from numpy import log10
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.optimizers import SGD
import sigopt

x_train = numpy.random.random((1000, 20))
y_train = keras.utils.to_categorical(
    numpy.random.randint(10, size=(1000, 1)),
    num_classes=10,
)
x_test = numpy.random.random((100, 20))
y_test = keras.utils.to_categorical(
    numpy.random.randint(10, size=(100, 1)),
    num_classes=10,
)

# Default parameter values; SigOpt overrides these with suggested
# assignments when the run is part of an experiment.
sigopt.params.setdefaults(
    dropout_rate=0.5,
    hidden_1=64,
    activation_1="relu",
    hidden_2=64,
    activation_2="relu",
    log_lr=log10(0.01),
    log_decay=-6,
    momentum=0.9,
    batch_size=128,
)

model = Sequential()
model.add(
    Dense(
        units=sigopt.params.hidden_1,
        activation=sigopt.params.activation_1,
        input_dim=20,
    )
)
model.add(Dropout(sigopt.params.dropout_rate))
model.add(
    Dense(
        units=sigopt.params.hidden_2,
        activation=sigopt.params.activation_2,
    )
)
model.add(Dropout(sigopt.params.dropout_rate))
model.add(Dense(10, activation="softmax"))

sgd = SGD(
    lr=10 ** sigopt.params.log_lr,
    decay=10 ** sigopt.params.log_decay,
    momentum=sigopt.params.momentum,
    nesterov=True,
)
model.compile(
    loss="categorical_crossentropy",
    optimizer=sgd,
    metrics=["accuracy"],
)

model.fit(
    x=x_train,
    y=y_train,
    epochs=20,
    batch_size=sigopt.params.batch_size,
)

evaluation_loss, accuracy = model.evaluate(
    x=x_test,
    y=y_test,
    batch_size=sigopt.params.batch_size,
)

sigopt.log_metric("evaluation_loss", evaluation_loss)
sigopt.log_metric("accuracy", accuracy)
```
If you're training a model that needs a GPU, you will want to use `resources` to ensure that your model has access to GPUs. Requests and limits are optional, but may be helpful if your model is having trouble getting enough memory or CPU. Requests are resource guarantees and will cause your model to wait until the cluster has the requested resources available before running. Limits prevent your model from using additional resources. These map directly to Kubernetes requests and limits.
Note: If you only set a limit it will also set a request of the same value. See the Kubernetes documentation for details.
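For example, in a limits-only `resources` block like the sketch below (illustrative values), Kubernetes treats the limits as the requests as well:

```yaml
resources:
  limits:
    cpu: 1        # also becomes the CPU request
    memory: 1Gi   # also becomes the memory request
```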
- CPU resources are measured in number of "logical cores" and can be decimal values. This is generally a vCPU in the cloud and a hyperthread on a custom cluster. See Meaning of CPU in the Kubernetes documentation for cloud-specific and formatting details.
- Memory is measured in number of bytes but can be suffixed with units such as "Mi" or "Gi" for mebibytes and gibibytes respectively. See Meaning of Memory in the Kubernetes documentation for details and below for a simple example.
- The `gpus` field is currently specific to NVIDIA GPUs tagged as "nvidia.com/gpu". Alternatives can be used by adding them to the `limits` field.
The example below guarantees that 20% (0.2) of a logical core, 200 megabytes of memory, and a GPU are available for your model to run. If the cluster you are running on does not have enough free compute resources, it will wait until they become available before running your model. This example also limits your model so that it does not use more than 2 logical cores and 2 gigabytes of memory.
```yaml
name: My Experiment
run: python model.py
image: example/foobar
resources:
  gpus: 1
  requests:
    cpu: 0.2
    memory: 200M
  limits:
    cpu: 2
    memory: 2Gi
```
Orchestrate uses Docker to build and upload your model environment. If you find that `sigopt cluster optimize` is taking a long time, you may want to try some of the following tips to reduce the build and upload time of your model:

- Omit files like logs, saved models, tests, and virtual environments. Changes to these extra files will cause SigOpt to re-build your model environment.
- You can try downloading or streaming your training data in your run commands instead of including it in the model environment.
- List the files you want to omit from your model environment in a Docker ignore file (`.dockerignore`) alongside your model code, for example:

```
# python bytecode
**/*.pyc
**/__pycache__/
# virtual environment
venv/
# training data
data/
# tests
tests/
# anything else
.git/
saved_models/
logs/
```

Clusters with the provider `aws` will use AWS ECR as their default container registry, and clusters with the provider `custom` will use Docker Hub.

To use a custom image registry, provide the `--registry` argument when you connect to your cluster:

```bash
$ sigopt cluster connect \
  --cluster-name tiny-cluster \
  --provider custom \
  --kubeconfig /path/to/kubeconfig \
  --registry myregistrydomain:port
```