Dockerfile: Define Your Environment
SigOpt needs to know how to set up your model's environment in order to run it. SigOpt will create a Docker container with your specified environment requirements. You can read more about the Dockerfile in Docker's official docs.
You can use a Dockerfile you've already created, or SigOpt will auto-generate a Dockerfile template for you if you run the following:
$ sigopt init
Example Dockerfile:
FROM python:3.9

RUN pip install --no-cache-dir --user sigopt

COPY requirements.txt /orchestrate/requirements.txt
RUN pip install --no-cache-dir --user -r /orchestrate/requirements.txt

COPY . /orchestrate
WORKDIR /orchestrate
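Before handing the Dockerfile to SigOpt, you can sanity-check it with a local Docker build. This is a minimal sketch; the image tag `my-model` is just an illustrative name, not something SigOpt requires:

```shell
# Build the image from the Dockerfile in the current directory
docker build -t my-model .

# Confirm the sigopt package installed cleanly inside the image
docker run --rm my-model python -c "import sigopt"
```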

Enabling GPU access

To enable GPU access for your workflows, you will need to specify your CUDA and modeling framework installation in your Dockerfile. NVIDIA publishes Dockerfiles for the various CUDA versions that you can modify as needed.
Here's an example Dockerfile (adapted from this original) for enabling GPU access when running SigOpt; it uses CUDA 11.1.1 with TensorFlow 2.4.1:
FROM nvidia/cuda:11.1.1-cudnn8-runtime

USER root

RUN set -ex \
    ; apt-get update -yqq \
    ; apt-get install -yqq git python3 python3-pip \
    ; rm -rf /var/lib/apt/lists/* \
    ; :

RUN pip3 install --no-cache-dir --upgrade pip
RUN pip3 install --no-cache-dir tensorflow-gpu==2.4.1 numpy
RUN pip3 install --no-cache-dir sigopt

RUN ln -s /usr/local/cuda/lib64/libcusolver.so.11 /usr/local/cuda/lib64/libcusolver.so.10
ENV LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/usr/local/cuda/lib64

COPY venv_requirements.txt /orchestrate/venv_requirements.txt
RUN pip3 install -r /orchestrate/venv_requirements.txt
RUN useradd orchestrate

USER orchestrate
COPY . /orchestrate
WORKDIR /orchestrate
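You can verify that containers built from this image actually see the GPU before running SigOpt against it. This sketch assumes the NVIDIA Container Toolkit is installed on the host (so that `docker run --gpus` works); the tag `my-gpu-model` is an illustrative name:

```shell
# Build the GPU-enabled image
docker build -t my-gpu-model .

# Ask TensorFlow inside the container to list visible GPUs;
# an empty list means the CUDA setup is not being picked up
docker run --rm --gpus all my-gpu-model \
    python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```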