CLI Reference
The SigOpt CLI is a command-line tool for managing training clusters and running optimization experiments.
Cluster Configuration File
The cluster configuration file is commonly referred to as `cluster.yml`, but you can name yours anything you like. The file is used when you create a SigOpt cluster with `sigopt cluster create -f cluster.yml`. You can update your cluster configuration file after the cluster has been created to change the number of nodes in your cluster or change instance types. Apply these changes by running `sigopt cluster update -f cluster.yml`. Some updates might not be supported, for example introducing GPU nodes to your cluster in some regions. If an update is not supported, you will need to destroy the cluster and create it again.
The available fields are:

| Field | Required | Description |
| --- | --- | --- |
| `cpu`, `gpu` | Yes | You must provide at least one of `cpu` or `gpu`. Define the compute that your cluster will need in terms of `instance_type`, `max_nodes`, and `min_nodes`. It is recommended that you set `min_nodes` to 0 so the autoscaler can remove all of your expensive compute nodes when they aren't in use. It's ok if `max_nodes` and `min_nodes` are the same value, as long as `max_nodes` is not 0. |
| `cluster_name` | Yes | A name for your cluster. You will share this with anyone else who wants to connect to your cluster. |
| `aws` | No | Overrides environment-provided values for `aws_access_key_id` or `aws_secret_access_key`. |
| `kubernetes_version` | No | The version of Kubernetes to use for your cluster. Currently supports Kubernetes 1.16, 1.17, 1.18, and 1.19. Defaults to the latest stable version supported by SigOpt, which is currently 1.18. |
| `provider` | No | The provider for your cluster: `aws` or `custom`. |
| `system` | No | System nodes are required to run the autoscaler. You can specify the number and type of system nodes with `min_nodes`, `max_nodes`, and `instance_type`. The value of `min_nodes` must be at least 1 so that you always have at least 1 system node. The defaults are `min_nodes: 1`, `max_nodes: 2`, and `instance_type: "t3.large"`. |
Example
The example YAML file below defines a CPU cluster named `tiny-cluster` with two `t2.small` AWS instances.
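A minimal sketch of such a file, based on the fields documented above:

```yaml
# cluster.yml -- sketch of a CPU-only cluster of up to two t2.small nodes
cluster_name: tiny-cluster
provider: aws
cpu:
  instance_type: t2.small
  min_nodes: 0
  max_nodes: 2
```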
Configure training orchestration
The SigOpt configuration file tells SigOpt how to set up and run the model, which metrics to track, and which hyperparameters to tune.
You can use a SigOpt Run config YAML file you've already created, or SigOpt will auto-generate `run.yml` and `cluster.yml` template files for you if you run the following:
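```bash
# Generates run.yml and cluster.yml template files in the current directory
sigopt init
```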
The available fields for `run.yml` are:

| Field | Required | Description |
| --- | --- | --- |
| `image` | Yes | Name of the Docker image SigOpt builds for you. You can also point this to an existing Docker image for SigOpt to use. |
| `name` | Yes | Name for your Run. |
| `aws` | No | AWS access credentials to use with the Run. Will be used to access S3 during model execution. |
| `resources` | No | Resources to allocate to each Run. Can specify limits and requests for cpu, memory, and ephemeral-storage, and can specify GPUs. |
| `run` | No | Model file to execute. |
To orchestrate an optimization AI Experiment, you will also need to specify an `experiment.yml` file.
The available fields are:

| Field | Required | Description |
| --- | --- | --- |
| `budget` | Yes | Number of Runs for a SigOpt AI Experiment. |
| `metrics` | Yes | Evaluation and storage metrics for a SigOpt AI Experiment. |
| `name` | Yes | Name for your AI Experiment. |
| `parameters` | Yes | Parameters and ranges specified for a SigOpt AI Experiment. |
| `type` | Yes | Type of AI Experiment to execute: `offline` for SigOpt Optimization and All Constraint Experiments, `random` for Random Search, or `grid` for Grid Search. |
| `parallel_bandwidth` | No | Number of parallel workers. |
Considerations for `resources`
When specifying CPUs, valid amounts are whole numbers (1, 2) and fractional numbers or millis (1.5 and 1500m both represent 1.5 CPUs). When specifying memory, valid amounts are shown in the Kubernetes documentation for memory resources; some examples are 1e9, 1Gi, and 500M. For GPUs, only whole numbers are valid.
When choosing the resources for a single model training run, it's important to keep in mind that some resources on your cluster will be auto-reserved for Kubernetes processes. For this reason, you must specify fewer resources for your model than are available on each node. A good rule of thumb is to assume that your node will have 0.5 CPU less than the total to run your model.
For example, if your nodes have 8 CPUs then you must specify fewer than 8 CPUs in the requests section of your `resources` in order for your model to run. Keep in mind that you can specify fractional amounts of CPU, e.g. 7.5 or 7500m.
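For instance, on an 8-CPU node, a request like the following leaves headroom for Kubernetes processes:

```yaml
# resources excerpt: request just under the node's 8 CPUs
resources:
  requests:
    cpu: 7.5  # equivalently "7500m"
```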
Example
Here's an example of SigOpt Run and AI Experiment YAML files:
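The sketches below are illustrative; the names, run command, and parameter choices are placeholders rather than values from a real project.

```yaml
# run.yml (illustrative values)
name: my-mlp-run
image: my-mlp-image
run: python model.py
resources:
  requests:
    cpu: 1
    memory: 1Gi
```

```yaml
# experiment.yml (illustrative values)
name: my-mlp-experiment
type: offline
parameters:
  - name: hidden_size
    type: int
    bounds:
      min: 16
      max: 128
  - name: learning_rate
    type: double
    bounds:
      min: 0.00001
      max: 0.1
metrics:
  - name: accuracy
    objective: maximize
budget: 30
parallel_bandwidth: 2
```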
SigOpt Commands
The best way to learn the most up-to-date information about cluster commands is from the command-line interface (CLI) itself! Append `--help` to any command to learn about subcommands, arguments, and flags.
For example, to learn more about all SigOpt commands, run:
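```bash
sigopt --help
```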
To learn more about the specific `sigopt cluster optimize` command, run:
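```bash
sigopt cluster optimize --help
```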
For a cheat sheet of all SigOpt CLI commands go to our API Reference.
Adding AWS Policies
Users creating AWS clusters with SigOpt can easily interface with different AWS services. To allow your cluster permission to access different AWS services, provide additional AWS policies in the `aws.additional_policies` section of the cluster configuration file.
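For example, a hypothetical excerpt that grants the cluster read access to S3 (the policy ARN shown is an AWS-managed policy; substitute whichever policies your workload needs):

```yaml
# cluster.yml (excerpt); the attached policy is illustrative
aws:
  additional_policies:
    - arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
```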
SigOpt Logging
The SigOpt CLI integrates seamlessly with the SigOpt API to optimize the hyperparameters of your model. It handles communication with the SigOpt API under the hood, so that you only need to focus on your model, some lightweight installation requirements, and your experiment configuration file.
As you write your model, use a few lines of code from the `sigopt` package to read hyperparameters and write your model's metric(s).
Logging Example
Below is a comparison of two nearly identical Multilayer Perceptron models. The first example does not use SigOpt; the second does. As you can see, the model with SigOpt uses `sigopt.get_parameter` to read assignments from SigOpt and `sigopt.log_metric` to send its metric value back to SigOpt.
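A minimal sketch of the instrumented pattern, assuming scikit-learn's `MLPClassifier` on the Iris dataset as a stand-in model:

```python
# Sketch of a SigOpt-instrumented MLP; the dataset and model choice are
# illustrative stand-ins, not the original listings.
import sigopt
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Read hyperparameter assignments from SigOpt (defaults apply when running locally)
hidden_size = sigopt.get_parameter("hidden_size", default=32)
learning_rate = sigopt.get_parameter("learning_rate", default=0.001)

model = MLPClassifier(
    hidden_layer_sizes=(hidden_size,),
    learning_rate_init=learning_rate,
    max_iter=500,
)
model.fit(X_train, y_train)

# Send the metric value back to SigOpt
sigopt.log_metric("accuracy", model.score(X_test, y_test))
```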
SigOpt Compute Resources
If you're training a model that needs a GPU, you will want to use `resources` to ensure that your model has access to GPUs. Requests and limits are optional, but may be helpful if your model is having trouble running with enough memory or CPU resources.
Requests are resource guarantees and will cause your model to wait until the cluster has available resources before running. Limits prevent your model from using additional resources. These map directly to Kubernetes requests and limits.
Note: If you only set a limit, a request of the same value is set automatically. See the Kubernetes documentation for details.
Resource Types
CPU resources are measured in number of "logical cores" and can be decimal values. This is generally a vCPU in the cloud and a hyperthread on a custom cluster. See Meaning of CPU on the Kubernetes documentation for cloud specific and formatting details.
Memory is measured in number of bytes but can be suffixed with "Mi" or "Gi" for mebibytes and gibibytes respectively. See Meaning of Memory on the Kubernetes documentation for details and below for a simple example.
The `gpus` field is currently specific to NVIDIA GPUs tagged as `nvidia.com/gpu`. Alternatives can be used by adding them to the `limits` field.
The example below guarantees that 20% (0.2) of a logical core, 200 megabytes of memory, and a GPU are available for your model to run. If the cluster you are running on does not have enough free compute resources, it will wait until they become available before running your model. This example also limits your model so that it does not use more than 2 logical cores and 2 gigabytes of memory.
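A sketch matching that description:

```yaml
# Reconstructed from the description above
resources:
  gpus: 1
  requests:
    cpu: 0.2      # 20% of a logical core
    memory: 200M  # 200 megabytes
  limits:
    cpu: 2
    memory: 2Gi
```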
Docker
Orchestrate uses Docker to build and upload your model environment. If you find that `sigopt cluster optimize` is taking a long time, then you may want to try some of the following tips to reduce the build and upload time of your model:
Keep your model directory free of extra files
Omit files like logs, saved models, tests, and virtual environments. Changes to these extra files will cause SigOpt to rebuild your model environment.
Omit your training data from your model directory
You can try downloading or streaming your training data in your run commands instead.
Create a `.dockerignore` file in your model directory
This file should contain a list of the files that you want to omit from your model environment.
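For instance, a minimal `.dockerignore` might look like:

```
# Illustrative entries; list whatever your model doesn't need at runtime
data/
logs/
saved_models/
venv/
.git
```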
See the official Docker documentation for more information.
Custom Image Registries
Clusters with the provider `aws` will use AWS ECR as their default container registry, and clusters with the provider `custom` will use Docker Hub.
To use a custom image registry, provide the `registry` argument when you connect to your cluster:
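A hypothetical invocation is sketched below; the flag names are an assumption, so confirm the exact interface with `sigopt cluster connect --help`:

```bash
# Flag names are assumptions -- verify with --help
sigopt cluster connect --cluster-name my-cluster --registry registry.example.com
```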