Prior Beliefs

You can supply SigOpt with your domain expertise about how metric values behave for certain parameters. These prior beliefs make the optimization process more efficient.

Defining the Prior Belief

A prior belief can be defined through the prior field of each continuous parameter. By default, SigOpt assumes that all parameters are uniformly distributed.

When a prior belief is set for a parameter, SigOpt is more likely to suggest configurations from regions of that parameter with a high probability density function (PDF) value early in the experiment. Roughly speaking, a parameter value with a PDF value of 2 is twice as likely to be suggested as one with a PDF value of 1. The effect of a prior belief is most pronounced during the initial portion of an experiment.
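
For intuition, the illustration below (not part of the SigOpt API) uses scipy.stats to compare a normal prior's density at two candidate values; the ratio of the densities roughly corresponds to how much more likely one value is to be suggested early on.

from scipy.stats import norm

# Illustration only: a normal prior like the colsample_bytree example later on this page.
prior = norm(loc=0.6, scale=0.15)
pdf_near_mean = prior.pdf(0.6)  # high density near the mean
pdf_far = prior.pdf(0.3)        # low density two standard deviations away
print(f"{pdf_near_mean / pdf_far:.1f}x more likely to be suggested early")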

Example: Normally Distributed Parameters

Suppose that in previous experiments you have observed that the highest-performing values of the log_learning_rate parameter appear to be normally distributed.

Defining this belief at the start of your next optimization experiment can warm-start the process and produce a lift in performance. For further information, check out our blog post and our webinar.
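
A parameter definition encoding that belief could look like the sketch below; the bounds, mean, and scale here are placeholders for whatever your earlier experiments suggest.

# Sketch only: placeholder bounds and prior values for illustration.
log_learning_rate = dict(
  name="log_learning_rate",
  type="double",
  bounds=dict(min=-5, max=0),
  prior=dict(name="normal", mean=-2.5, scale=0.75)
  )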

Creating an Experiment with Prior Beliefs

Core Module

from sigopt import Connection

conn = Connection(client_token="USER_TOKEN")
experiment = conn.experiments().create(
  name="xgboost with prior beliefs",
  parameters=[
    dict(
      name="log10_learning_rate",
      bounds=dict(
        min=-4,
        max=0
        ),
      prior=dict(
        name="beta",
        shape_a=2,
        shape_b=4.5
        ),
      type="double"
      ),
    dict(
      name="max_depth",
      bounds=dict(
        min=3,
        max=12
        ),
      type="int"
      ),
    dict(
      name="colsample_bytree",
      bounds=dict(
        min=0,
        max=1
        ),
      prior=dict(
        name="normal",
        mean=0.6,
        scale=0.15
        ),
      type="double"
      )
    ],
  metrics=[
    dict(
      name="AUPRC",
      objective="maximize",
      strategy="optimize"
      )
    ],
  observation_budget=65,
  parallel_bandwidth=2,
  type="offline"
  )
print("Created experiment: https://app.sigopt.com/experiment/" + experiment.id)

AI Module

import sigopt

experiment = sigopt.create_experiment(
  name="xgboost with prior beliefs",
  parameters=[
    dict(
      name="log10_learning_rate",
      bounds=dict(
        min=-4,
        max=0
        ),
      prior=dict(
        name="beta",
        shape_a=2,
        shape_b=4.5
        ),
      type="double"
      ),
    dict(
      name="max_depth",
      bounds=dict(
        min=3,
        max=12
        ),
      type="int"
      ),
    dict(
      name="colsample_bytree",
      bounds=dict(
        min=0,
        max=1
        ),
      prior=dict(
        name="normal",
        mean=0.6,
        scale=0.15
        ),
      type="double"
      )
    ],
  metrics=[
    dict(
      name="AUPRC",
      objective="maximize",
      strategy="optimize"
      )
    ],
  budget=65,
  parallel_bandwidth=2,
  type="offline"
  )
print("Created experiment: https://app.sigopt.com/experiment/" + experiment.id)

Updating Prior Beliefs

While an experiment is in progress, you can change your belief about how a particular parameter is distributed by updating the prior directly through our API. The example below adjusts the prior belief on log10_learning_rate; note that the update call lists every parameter, including those whose priors are unchanged.

Core Module

experiment = conn.experiments(experiment.id).update(
  parameters=[
    dict(
      name="log10_learning_rate",
      prior=dict(
        name="beta",
        shape_a=8,
        shape_b=2
        )
      ),
    dict(
      name="max_depth"
      ),
    dict(
      name="colsample_bytree"
      )
    ]
  )
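
To confirm the change took effect, you can fetch the experiment and inspect each parameter; this quick check assumes the Core client exposes the prior field on parameter objects.

# Sketch: fetch the experiment and print each parameter's prior
# (assumes the client exposes prior on parameter objects).
experiment = conn.experiments(experiment.id).fetch()
for parameter in experiment.parameters:
  print(parameter.name, parameter.prior)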

AI Module

Updating prior beliefs is not supported in the AI module.

Removing Prior Beliefs

A prior belief can be removed while an experiment is in progress; SigOpt then reverts to the default assumption that the parameter is uniformly distributed. To remove a prior, simply set the prior field to None. The example below removes the prior belief on the colsample_bytree parameter.

Core Module

experiment = conn.experiments(experiment.id).update(
  parameters=[
    dict(
      name="log10_learning_rate"
      ),
    dict(
      name="max_depth"
      ),
    dict(
      name="colsample_bytree",
      prior=None
      )
    ]
  )

AI Module

Removing prior beliefs is not supported in the AI module.

Limitations

Prior beliefs can only be defined for continuous (double) parameters. As noted above, updating or removing prior beliefs is not supported in the AI module.
