Dockerfile: Define Your Environment

To run your model, SigOpt needs to know how to set up its environment. SigOpt will create a Docker container with your specified environment requirements. You can read more about Dockerfiles in Docker's official documentation.

You can use a Dockerfile you've already created, or SigOpt will auto-generate a Dockerfile template for you if you run the following:

$ sigopt init

Example Dockerfile:

FROM python:3.9

# Install the SigOpt client
RUN pip install --no-cache-dir --user sigopt

# Install your model's dependencies
COPY requirements.txt /orchestrate/requirements.txt
RUN pip install --no-cache-dir --user -r /orchestrate/requirements.txt

# Copy your model code into the image and make it the working directory
COPY . /orchestrate
WORKDIR /orchestrate
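
If you'd like to sanity-check the image before handing it to SigOpt, you can build and run it locally with standard Docker commands. The image tag and entry-point script below are illustrative placeholders; substitute your own:

$ docker build -t my-sigopt-model .
$ docker run --rm my-sigopt-model python model.py

If the image builds and your model script starts, the same Dockerfile should work when SigOpt builds the environment for you.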

Enabling GPU access

To enable GPU access for your workflows, you will need to specify your CUDA and modeling-framework installation in your Dockerfile. NVIDIA publishes Dockerfiles for its CUDA images that you can use as a starting point and modify as needed.

Here's an example Dockerfile (adapted from this original) for enabling GPU access when running SigOpt; it uses CUDA 11.1.1 with TensorFlow 2.4.1:

FROM nvidia/cuda:11.1.1-cudnn8-runtime

# Switch to root to install system packages
USER root

# Install git, Python 3, and pip
RUN set -ex \
    ; apt-get update -yqq \
    ; apt-get install -yqq git python3 python3-pip \
    ; rm -rf /var/lib/apt/lists/* \
    ; :

# Upgrade pip, then install a GPU-enabled TensorFlow and the SigOpt client
RUN pip3 install --no-cache-dir --upgrade pip
RUN pip3 install --no-cache-dir tensorflow-gpu==2.4.1 numpy
RUN pip3 install --no-cache-dir sigopt

# TensorFlow 2.4 expects libcusolver.so.10, which CUDA 11.1 ships as .so.11, so expose it under the expected name
RUN ln -s /usr/local/cuda/lib64/libcusolver.so.11 /usr/local/cuda/lib64/libcusolver.so.10
# Make the NVIDIA driver and CUDA libraries visible at runtime
ENV LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/usr/local/cuda/lib64

# Install the model's additional Python dependencies
COPY venv_requirements.txt /orchestrate/venv_requirements.txt
RUN pip3 install --no-cache-dir -r /orchestrate/venv_requirements.txt

# Create a non-root user to run the model
RUN useradd orchestrate
USER orchestrate

# Copy your model code into the image and make it the working directory
COPY . /orchestrate
WORKDIR /orchestrate
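
Before using a GPU image with SigOpt, it can be worth confirming that TensorFlow inside the container actually sees a GPU. Assuming your machine has an NVIDIA GPU, a recent Docker, and the NVIDIA Container Toolkit installed, and using my-gpu-model as a placeholder image tag, a quick local check looks like this:

$ docker build -t my-gpu-model .
$ docker run --rm --gpus all my-gpu-model python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"

An empty list in the output means the container cannot see a GPU; in that case, check the host's NVIDIA driver and container toolkit setup before changing the Dockerfile.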
