Why does python bayesian optimization give wrong answers

#40206 · asked 2 years ago by no_nonsense_45

I am using the Python package dragonfly to perform Bayesian optimization. I am testing it on a simple example (minimize y = x over [-10, 10]; the answer is clearly x = -10). However, I get different results depending on how I call the minimise function:

import numpy as np
from dragonfly import load_config, minimise_function

def fopt(x):
    # the point arrives nested, hence the x[0][0] indexing
    tx = np.array([x[0][0]])
    # add uniform noise drawn from [-0.01, 0)
    return tx + 0.01*(np.random.random() - 1)

domain_vars = [{'type': 'float', 'min': -10.0, 'max': 10.0, 'dim': 1}]
config_params = {'domain': domain_vars}
newconfig = load_config(config_params)
max_num_evals = 100
opt_val, opt_pt, history = minimise_function(fopt, newconfig.domain, max_num_evals,
                                             config=newconfig)
print(opt_val, opt_pt)

The printed output is: -10.00976228312294 [array([1.59872116e-14])]. So the minimum value is found, but the reported minimum point is wrong: it is roughly 0, when it should be -10.
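To confirm that the reported point and value cannot belong together, here is a minimal sketch independent of dragonfly. The noise term 0.01*(random() - 1) lies in [-0.01, 0), so an observed value near -10.0098 can only come from x very close to -10, never from x ≈ 1.6e-14:

```python
import numpy as np

rng = np.random.default_rng(0)

def fopt(x):
    # same objective as in the question: y = x plus uniform
    # noise drawn from [-0.01, 0)
    return x + 0.01 * (rng.random() - 1)

# Every evaluation at the true minimiser x = -10 lands in
# [-10.01, -10), consistent with the reported optimum value.
samples = [fopt(-10.0) for _ in range(1000)]
print(min(samples), max(samples))

# An evaluation at the *reported* point x ~ 1.6e-14 can never
# be lower than -0.01, far above the reported -10.0098.
assert all(-10.01 <= s < -10.0 for s in samples)
assert fopt(1.59872116e-14) > -0.01
```

So the value -10.0098 must have been observed at a query point near -10, which makes the returned opt_pt look inconsistent with the returned opt_val.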

However, if I use a different method:

min_val, min_pt, history = minimise_function(lambda x: x + 0.01*(np.random.random() - 1), [[-10, 10]], 100)
print(min_val, min_pt)

I get the correct answer: -10.009947789035548 [-10.]
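One visible difference between the two versions is the shape of the point the objective receives, and the type it returns. This is a sketch of that difference; the point shapes are my assumption, inferred from the indexing in the code above rather than from dragonfly's documentation:

```python
import numpy as np

# Hypothetical stand-ins for the argument each objective receives:
nested_point = [np.array([-10.0])]   # config-based call: fopt indexes x[0][0]
flat_point = np.array([-10.0])       # domain-as-list call: the lambda uses x directly

def fopt(x):
    tx = np.array([x[0][0]])
    return tx  # a length-1 ndarray, not a Python float

# The config-based objective returns an array, while the lambda
# version (x + noise on a flat point) effectively returns a scalar-like value.
print(fopt(nested_point))
assert isinstance(fopt(nested_point), np.ndarray)
assert fopt(nested_point).shape == (1,)
```

I don't know whether the array-valued return or the nested indexing is what confuses the optimizer's bookkeeping, but it is the main structural difference between the two calls.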

Why is this the case?

Tags: python, bayesian, nonlinear-optimization

0 Answers
