Apply Prior Knowledge
SigOpt can incorporate your domain expertise about how metric values behave across a parameter's range. Encoding these prior beliefs makes the optimization process more efficient.

Defining the Prior Distribution

A prior distribution can be defined through the prior field of each continuous parameter. By default, SigOpt assumes a uniform prior distribution for every parameter.
When a prior belief is set for a parameter, SigOpt is more likely to generate configurations from regions of that parameter with a high probability density function (PDF) value early in the SigOpt Experiment. Roughly speaking, a parameter configuration with a PDF value of 2 is twice as likely to be suggested as one with a PDF value of 1. The effect of the prior belief is most notable during the initial portion of an experiment.
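The relative-likelihood rule above can be sketched numerically. This is only an illustration of PDF ratios for a normal prior; it is not SigOpt's internal sampling logic:

```python
import math

def normal_pdf(x, mean, scale):
    # Density of a normal distribution with the given mean and scale.
    return math.exp(-0.5 * ((x - mean) / scale) ** 2) / (scale * math.sqrt(2 * math.pi))

# With prior=dict(name="normal", mean=-1.5, scale=1), a configuration at the
# mean has twice the PDF value of one about 1.18 standard deviations away,
# so early in the experiment it is roughly twice as likely to be suggested.
p_center = normal_pdf(-1.5, -1.5, 1)
p_off = normal_pdf(-1.5 + math.sqrt(2 * math.log(2)), -1.5, 1)
print(round(p_center / p_off, 2))  # → 2.0
```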

Example: Normally Distributed Parameters

Suppose that in prior experiments you observed that the highest-performing values of the log_learning_rate parameter are normally distributed.
Defining this behavior at the start of your next optimization experiment warm-starts the process and can produce a lift in performance. For further information, check out our blog post and our webinar.
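Such a parameter definition might look like the following sketch. The bounds and prior values mirror the snippets below; the surrounding experiment-creation call is omitted here:

```python
import math

# Hypothetical continuous parameter with a normal prior belief.
# Values are log learning rates, so the prior's mean of -1.5 says the
# best-performing values have tended to cluster around exp(-1.5) ≈ 0.22.
log_learning_rate = dict(
    name="log_learning_rate",
    type="double",
    bounds=dict(min=math.log(0.000001), max=math.log(1)),
    prior=dict(name="normal", mean=-1.5, scale=1),
)
print(log_learning_rate["prior"])
```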

Defining Prior Distributions with Code

Normal Distribution (fields of a continuous parameter definition; assumes numpy is imported as np):
bounds=dict(min=np.log(0.000001), max=np.log(1)),
prior=dict(name="normal", mean=-1.5, scale=1),
Beta Distribution (fields of a continuous parameter definition):
bounds=dict(min=0, max=1),
prior=dict(name="beta", shape_a=2, shape_b=4.5),
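A beta prior concentrates early suggestions near the distribution's mode. A quick check of where beta(shape_a=2, shape_b=4.5) peaks, using the standard mode formula (a - 1) / (a + b - 2) for a beta distribution with a, b > 1:

```python
# Where the beta(shape_a=2, shape_b=4.5) prior concentrates probability mass.
# Early suggestions for this parameter will tend to cluster near this value.
shape_a, shape_b = 2, 4.5
mode = (shape_a - 1) / (shape_a + shape_b - 2)
print(round(mode, 3))  # → 0.222
```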