The main goal of the black box optimization toolkit (bbotk) is to provide a common framework for optimization for other packages. Therefore bbotk includes the following R6 classes that can be used in a variety of optimization scenarios.

- Optimizer: Objects of this class allow you to optimize an object of the class OptimInstance.
- OptimInstance: Defines the optimization problem, consisting of an Objective, the search_space and a Terminator. All evaluations on the OptimInstance will be automatically stored in its own Archive.
- Objective: Objects of this class contain the objective function. The class ensures that the objective function is called in the right way and defines whether the function should be minimized or maximized.
- Terminator: Objects of this class control the termination of the optimization independently of the optimizer.

bbotk also includes some basic optimizers and can therefore be used on its own. The registered optimizers can be queried as follows:
library(bbotk)
#> Loading required package: paradox
opts()
#> <DictionaryOptimizer> with 8 stored values
#> Keys: cmaes, design_points, focus_search, gensa, grid_search, irace,
#> nloptr, random_search
This vignette will show you how to use the bbotk classes to solve a simple optimization problem. Furthermore, you will learn how to

- construct an Objective,
- create an OptimInstance with a restricted search_space,
- access the Archive of evaluated function calls.

Using bbotk to optimize a function

In the following we will use bbotk to minimize this function:
fun = function(xs) {
  c(y = -(xs[[1]] - 2)^2 - (xs[[2]] + 3)^2 + 10)
}
First we need to wrap fun
inside an Objective
object. For functions that expect a list as input we can use the ObjectiveRFun
class. Additionally, we need to specify the domain, i.e. the space of x-values that the function accepts as an input. Optionally, we can define the co-domain, i.e. the output space of our objective function. This is only necessary if we want to deviate from the default which would define the output to be named y and be minimized. Such spaces are defined using the package paradox
.
library(paradox)
domain = ps(
  x1 = p_dbl(-10, 10),
  x2 = p_dbl(-5, 5)
)

codomain = ps(
  y = p_dbl(tags = "maximize")
)

obfun = ObjectiveRFun$new(
  fun = fun,
  domain = domain,
  codomain = codomain,
  properties = "deterministic" # i.e. the function always returns the same result for the same input
)
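As a quick sanity check (not part of the original example), the wrapped objective can be evaluated directly on a single configuration; at x1 = 2 and x2 = -3 both squared terms vanish, so the function value is 10.

# Optional check: evaluate the wrapped objective on one point.
# At x1 = 2 and x2 = -3 both squared terms are zero, so y = 10.
obfun$eval(list(x1 = 2, x2 = -3))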
In the next step we decide when the optimization should stop. We can list all available terminators as follows:
trms()
#> <DictionaryTerminator> with 8 stored values
#> Keys: clock_time, combo, evals, none, perf_reached, run_time,
#> stagnation, stagnation_batch
The optimization should stop when it takes longer than 10 seconds or when 20 evaluations are reached.
terminators = list(
  evals = trm("evals", n_evals = 20),
  run_time = trm("run_time")
)
terminators
#> $evals
#> <TerminatorEvals>: Number of Evaluation
#> * Parameters: n_evals=20, k=0
#>
#> $run_time
#> <TerminatorRunTime>: Run Time
#> * Parameters: secs=30
We have to correct the default of secs=30
by setting the values
in the param_set
of the terminator.
terminators$run_time$param_set$values$secs = 10
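Alternatively, the value can be passed to trm() directly when the terminator is constructed, using the same short form as for the evals terminator above; a minimal sketch:

# Equivalent construction with the run time limit set at creation.
trm("run_time", secs = 10)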
We have created Terminator
objects for both of our criteria. To combine them we use the combo Terminator
.
term_combo = TerminatorCombo$new(terminators = terminators)
Before we finally start the optimization, we have to create an OptimInstance that also contains the Objective and the Terminator.
instance = OptimInstanceSingleCrit$new(objective = obfun, terminator = term_combo)
instance
#> <OptimInstanceSingleCrit>
#> * State: Not optimized
#> * Objective: <ObjectiveRFun:function>
#> * Search Space:
#> id class lower upper nlevels
#> 1: x1 ParamDbl -10 10 Inf
#> 2: x2 ParamDbl -5 5 Inf
#> * Terminator: <TerminatorCombo>
Note that OptimInstance(SingleCrit/MultiCrit)$new()
also has an optional search_space
argument. It can be used if the search_space
is only a subset of obfun$domain
or if you want to apply transformations. More on that later.
Finally, we have to define an Optimizer. As we have seen above, we can call opts() to list all available optimizers. We opt for the generalized simulated annealing optimizer from the GenSA package.
opt("gensa")
optimizer =
optimizer#> <OptimizerGenSA>: Generalized Simulated Annealing
#> * Parameters: trace.mat=FALSE
#> * Parameter classes: ParamDbl
#> * Properties: single-crit
#> * Packages: bbotk, GenSA
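Like the terminator, the optimizer exposes its control parameters through a param_set. As a hedged aside (trace.mat is the only parameter visible in the print output above; it is used here merely to illustrate the mechanism), such parameters can be set before optimizing:

# Optimizer settings are configured via its param_set, analogous to the terminator.
optimizer$param_set$values$trace.mat = FALSE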
To start the optimization we have to call the Optimizer
on the OptimInstance
.
optimizer$optimize(instance)
#> INFO [11:44:28.067] [bbotk] Starting to optimize 2 parameter(s) with '<OptimizerGenSA>' and '<TerminatorCombo> [any=TRUE]'
#> INFO [11:44:28.108] [bbotk] Evaluating 1 configuration(s)
#> INFO [11:44:28.124] [bbotk] Result of batch 1:
#> INFO [11:44:28.126] [bbotk] x1 x2 y
#> INFO [11:44:28.126] [bbotk] -4.689827 -1.278761 -37.71645
#> INFO [11:44:28.129] [bbotk] Evaluating 1 configuration(s)
#> INFO [11:44:28.140] [bbotk] Result of batch 2:
#> INFO [11:44:28.145] [bbotk] x1 x2 y
#> INFO [11:44:28.145] [bbotk] -5.930364 -4.400474 -54.852
#> INFO [11:44:28.148] [bbotk] Evaluating 1 configuration(s)
#> INFO [11:44:28.154] [bbotk] Result of batch 3:
#> INFO [11:44:28.155] [bbotk] x1 x2 y
#> INFO [11:44:28.155] [bbotk] 7.170817 -1.519948 -18.92791
#> INFO [11:44:28.158] [bbotk] Evaluating 1 configuration(s)
#> INFO [11:44:28.164] [bbotk] Result of batch 4:
#> INFO [11:44:28.166] [bbotk] x1 x2 y
#> INFO [11:44:28.166] [bbotk] 2.0452 -1.519948 7.807403
#> INFO [11:44:28.169] [bbotk] Evaluating 1 configuration(s)
#> INFO [11:44:28.174] [bbotk] Result of batch 5:
#> INFO [11:44:28.176] [bbotk] x1 x2 y
#> INFO [11:44:28.176] [bbotk] 2.0452 -2.064742 9.12325
#> INFO [11:44:28.178] [bbotk] Evaluating 1 configuration(s)
#> INFO [11:44:28.184] [bbotk] Result of batch 6:
#> INFO [11:44:28.185] [bbotk] x1 x2 y
#> INFO [11:44:28.185] [bbotk] 2.0452 -2.064742 9.12325
#> INFO [11:44:28.188] [bbotk] Evaluating 1 configuration(s)
#> INFO [11:44:28.193] [bbotk] Result of batch 7:
#> INFO [11:44:28.195] [bbotk] x1 x2 y
#> INFO [11:44:28.195] [bbotk] 2.045201 -2.064742 9.12325
#> INFO [11:44:28.197] [bbotk] Evaluating 1 configuration(s)
#> INFO [11:44:28.203] [bbotk] Result of batch 8:
#> INFO [11:44:28.205] [bbotk] x1 x2 y
#> INFO [11:44:28.205] [bbotk] 2.045199 -2.064742 9.12325
#> INFO [11:44:28.207] [bbotk] Evaluating 1 configuration(s)
#> INFO [11:44:28.212] [bbotk] Result of batch 9:
#> INFO [11:44:28.214] [bbotk] x1 x2 y
#> INFO [11:44:28.214] [bbotk] 2.0452 -2.064741 9.123248
#> INFO [11:44:28.217] [bbotk] Evaluating 1 configuration(s)
#> INFO [11:44:28.223] [bbotk] Result of batch 10:
#> INFO [11:44:28.225] [bbotk] x1 x2 y
#> INFO [11:44:28.225] [bbotk] 2.0452 -2.064743 9.123252
#> INFO [11:44:28.227] [bbotk] Evaluating 1 configuration(s)
#> INFO [11:44:28.233] [bbotk] Result of batch 11:
#> INFO [11:44:28.235] [bbotk] x1 x2 y
#> INFO [11:44:28.235] [bbotk] 1.9548 -3.935258 9.12325
#> INFO [11:44:28.238] [bbotk] Evaluating 1 configuration(s)
#> INFO [11:44:28.246] [bbotk] Result of batch 12:
#> INFO [11:44:28.248] [bbotk] x1 x2 y
#> INFO [11:44:28.248] [bbotk] 1.954801 -3.935258 9.12325
#> INFO [11:44:28.251] [bbotk] Evaluating 1 configuration(s)
#> INFO [11:44:28.256] [bbotk] Result of batch 13:
#> INFO [11:44:28.258] [bbotk] x1 x2 y
#> INFO [11:44:28.258] [bbotk] 1.954799 -3.935258 9.12325
#> INFO [11:44:28.261] [bbotk] Evaluating 1 configuration(s)
#> INFO [11:44:28.266] [bbotk] Result of batch 14:
#> INFO [11:44:28.268] [bbotk] x1 x2 y
#> INFO [11:44:28.268] [bbotk] 1.9548 -3.935257 9.123252
#> INFO [11:44:28.270] [bbotk] Evaluating 1 configuration(s)
#> INFO [11:44:28.276] [bbotk] Result of batch 15:
#> INFO [11:44:28.278] [bbotk] x1 x2 y
#> INFO [11:44:28.278] [bbotk] 1.9548 -3.935259 9.123248
#> INFO [11:44:28.281] [bbotk] Evaluating 1 configuration(s)
#> INFO [11:44:28.287] [bbotk] Result of batch 16:
#> INFO [11:44:28.288] [bbotk] x1 x2 y
#> INFO [11:44:28.288] [bbotk] 2 -3 10
#> INFO [11:44:28.291] [bbotk] Evaluating 1 configuration(s)
#> INFO [11:44:28.296] [bbotk] Result of batch 17:
#> INFO [11:44:28.298] [bbotk] x1 x2 y
#> INFO [11:44:28.298] [bbotk] 2.000001 -3 10
#> INFO [11:44:28.301] [bbotk] Evaluating 1 configuration(s)
#> INFO [11:44:28.306] [bbotk] Result of batch 18:
#> INFO [11:44:28.308] [bbotk] x1 x2 y
#> INFO [11:44:28.308] [bbotk] 1.999999 -3 10
#> INFO [11:44:28.311] [bbotk] Evaluating 1 configuration(s)
#> INFO [11:44:28.316] [bbotk] Result of batch 19:
#> INFO [11:44:28.318] [bbotk] x1 x2 y
#> INFO [11:44:28.318] [bbotk] 2 -2.999999 10
#> INFO [11:44:28.321] [bbotk] Evaluating 1 configuration(s)
#> INFO [11:44:28.326] [bbotk] Result of batch 20:
#> INFO [11:44:28.328] [bbotk] x1 x2 y
#> INFO [11:44:28.328] [bbotk] 2 -3.000001 10
#> INFO [11:44:28.335] [bbotk] Finished optimizing after 20 evaluation(s)
#> INFO [11:44:28.336] [bbotk] Result:
#> INFO [11:44:28.338] [bbotk] x1 x2 x_domain y
#> INFO [11:44:28.338] [bbotk] 2 -3 <list[2]> 10
#> x1 x2 x_domain y
#> 1: 2 -3 <list[2]> 10
Note that we did not specify the termination inside the optimizer. bbotk generally sets the termination of the optimizers to never terminate and instead interrupts the optimization internally as soon as a termination criterion is fulfilled.
The results can be queried from the OptimInstance
.
# result as a data.table
instance$result
#> x1 x2 x_domain y
#> 1: 2 -3 <list[2]> 10

# result as a list that can be passed to the Objective
instance$result_x_domain
#> $x1
#> [1] 2
#>
#> $x2
#> [1] -3

# result outcome
instance$result_y
#> y
#> 10
You can also access the whole history of evaluated points.
as.data.table(instance$archive)
#> x1 x2 y timestamp batch_nr x_domain_x1
#> 1: -4.689827 -1.278761 -37.716445 2022-08-25 11:44:28 1 -4.689827
#> 2: -5.930364 -4.400474 -54.851999 2022-08-25 11:44:28 2 -5.930364
#> 3: 7.170817 -1.519948 -18.927907 2022-08-25 11:44:28 3 7.170817
#> 4: 2.045200 -1.519948 7.807403 2022-08-25 11:44:28 4 2.045200
#> 5: 2.045200 -2.064742 9.123250 2022-08-25 11:44:28 5 2.045200
#> 6: 2.045200 -2.064742 9.123250 2022-08-25 11:44:28 6 2.045200
#> 7: 2.045201 -2.064742 9.123250 2022-08-25 11:44:28 7 2.045201
#> 8: 2.045199 -2.064742 9.123250 2022-08-25 11:44:28 8 2.045199
#> 9: 2.045200 -2.064741 9.123248 2022-08-25 11:44:28 9 2.045200
#> 10: 2.045200 -2.064743 9.123252 2022-08-25 11:44:28 10 2.045200
#> 11: 1.954800 -3.935258 9.123250 2022-08-25 11:44:28 11 1.954800
#> 12: 1.954801 -3.935258 9.123250 2022-08-25 11:44:28 12 1.954801
#> 13: 1.954799 -3.935258 9.123250 2022-08-25 11:44:28 13 1.954799
#> 14: 1.954800 -3.935257 9.123252 2022-08-25 11:44:28 14 1.954800
#> 15: 1.954800 -3.935259 9.123248 2022-08-25 11:44:28 15 1.954800
#> 16: 2.000000 -3.000000 10.000000 2022-08-25 11:44:28 16 2.000000
#> 17: 2.000001 -3.000000 10.000000 2022-08-25 11:44:28 17 2.000001
#> 18: 1.999999 -3.000000 10.000000 2022-08-25 11:44:28 18 1.999999
#> 19: 2.000000 -2.999999 10.000000 2022-08-25 11:44:28 19 2.000000
#> 20: 2.000000 -3.000001 10.000000 2022-08-25 11:44:28 20 2.000000
#> x_domain_x2
#> 1: -1.278761
#> 2: -4.400474
#> 3: -1.519948
#> 4: -1.519948
#> 5: -2.064742
#> 6: -2.064742
#> 7: -2.064742
#> 8: -2.064742
#> 9: -2.064741
#> 10: -2.064743
#> 11: -3.935258
#> 12: -3.935258
#> 13: -3.935258
#> 14: -3.935257
#> 15: -3.935259
#> 16: -3.000000
#> 17: -3.000000
#> 18: -3.000000
#> 19: -2.999999
#> 20: -3.000001
If the domain of the Objective
is complex, it is often necessary to define a simpler search space that can be handled by the Optimizer
and to define a transformation that transforms the value suggested by the optimizer to a value of the domain of the Objective
.
Reasons for transformations can be:
1. The objective reacts on a relative rather than an absolute scale to some parameter, e.g. for a parameter x3 the change from 0.1 to 0.2 has a similar effect as the change from 100 to 200; searching on a log scale is then more appropriate (a minimal sketch follows this list).
2. The parameters of the objective are coupled by a constraint, so the domain has more parameters than the problem has degrees of freedom (as in the box example below).
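The following minimal sketch (an illustration only, not part of the original example) shows how reason 1 could be expressed with the same .extra_trafo mechanism used below: the optimizer searches x3 on a log10 scale and the trafo maps the value back to the original scale.

# Hypothetical sketch for reason 1: search x3 in [-1, 2] on a log10 scale,
# then transform back so the objective sees values between 0.1 and 100.
log_search_space = ps(
  x3 = p_dbl(lower = -1, upper = 2),
  .extra_trafo = function(x, param_set) {
    x$x3 = 10^x$x3
    x
  }
)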
In the following we will look at an example for reason 2.
We want to construct a box with maximal volume, subject to the restriction h + w + d = 1. For simplicity we define our problem as a minimization problem.
fun_volume = function(xs) {
  c(y = -(xs$h * xs$w * xs$d))
}

domain = ps(
  h = p_dbl(lower = 0),
  w = p_dbl(lower = 0),
  d = p_dbl(lower = 0)
)

obj = ObjectiveRFun$new(
  fun = fun_volume,
  domain = domain
)
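As a quick check (not part of the original example), the optimal box under the constraint is the cube h = w = d = 1/3, whose volume is 1/27, so the best attainable objective value is -1/27, approximately -0.037:

# Optional check: the cube h = w = d = 1/3 has volume 1/27, so y = -1/27.
obj$eval(list(h = 1/3, w = 1/3, d = 1/3))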
We notice that our optimization problem has three parameters, but due to the restriction it only has two degrees of freedom. Therefore we only need to optimize two parameters and calculate h, w and d accordingly.
search_space = ps(
  h = p_dbl(lower = 0, upper = 1),
  w = p_dbl(lower = 0, upper = 1),
  .extra_trafo = function(x, param_set) {
    x = unlist(x)
    x["d"] = 2 - sum(x) # model d in dependence of h, w
    x = x / sum(x) # ensure that h + w + d = 1
    as.list(x)
  }
)
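To see what the trafo does, here is a manual walk-through (purely illustrative, not part of the original code) for the search-space point h = 0.5, w = 0.25: d is first set to 2 - 0.75 = 1.25 and the vector is then rescaled so that the three values sum to 1.

# Manual walk-through of the trafo for one search-space point (illustration only).
x = c(h = 0.5, w = 0.25)
x["d"] = 2 - sum(x)  # d = 1.25
x = x / sum(x)       # rescale: h = 0.25, w = 0.125, d = 0.625, summing to 1
as.list(x)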
Instead of the domain of the Objective, we now pass our constructed search_space, which includes the trafo, to the OptimInstance.
inst = OptimInstanceSingleCrit$new(
  objective = obj,
  search_space = search_space,
  terminator = trm("evals", n_evals = 30)
)
optimizer = opt("gensa")
lg = lgr::get_logger("bbotk")$set_threshold("warn") # turn off console output
optimizer$optimize(inst)
#> h w x_domain y
#> 1: 0.6647984 0.6671394 <list[3]> -0.0370368
lg = lgr::get_logger("bbotk")$set_threshold("info") # turn on console output
The optimizer only saw the search space during optimization and returns the following result:
inst$result_x_search_space
#> h w
#> 1: 0.6647984 0.6671394
Internally, the OptimInstance
transformed these values to the domain so that the result for the Objective
looks as follows:
inst$result_x_domain
#> $h
#> [1] 0.3323992
#>
#> $w
#> [1] 0.3335697
#>
#> $d
#> [1] 0.3340311
obj$eval(inst$result_x_domain)
#> y
#> -0.0370368
The following is just meant for advanced readers. If you want to evaluate the function outside of the optimization but still have the result stored in the Archive, you can do so by resetting the termination and calling $eval_batch().
library(data.table)
$terminator = trm("none")
inst data.table(h = c(0.6666, 0.6667), w = c(0.6666, 0.6667))
xvals =$eval_batch(xdt = xvals)
inst#> INFO [11:44:28.840] [bbotk] Evaluating 2 configuration(s)
#> INFO [11:44:28.851] [bbotk] Result of batch 31:
#> INFO [11:44:28.854] [bbotk] h w y
#> INFO [11:44:28.854] [bbotk] 0.6666 0.6666 -0.03703704
#> INFO [11:44:28.854] [bbotk] 0.6667 0.6667 -0.03703704
tail(as.data.table(instance$archive))
#> x1 x2 y timestamp batch_nr x_domain_x1
#> 1: 1.954800 -3.935259 9.123248 2022-08-25 11:44:28 15 1.954800
#> 2: 2.000000 -3.000000 10.000000 2022-08-25 11:44:28 16 2.000000
#> 3: 2.000001 -3.000000 10.000000 2022-08-25 11:44:28 17 2.000001
#> 4: 1.999999 -3.000000 10.000000 2022-08-25 11:44:28 18 1.999999
#> 5: 2.000000 -2.999999 10.000000 2022-08-25 11:44:28 19 2.000000
#> 6: 2.000000 -3.000001 10.000000 2022-08-25 11:44:28 20 2.000000
#> x_domain_x2
#> 1: -3.935259
#> 2: -3.000000
#> 3: -3.000000
#> 4: -3.000000
#> 5: -2.999999
#> 6: -3.000001
However, this does not update the result. You could set the result by calling inst$assign_result()
but this should be handled by the optimizer. Another way to get the best point is the following:
inst$archive$best()
#> h w y x_domain timestamp batch_nr
#> 1: 0.6667 0.6667 -0.03703704 <list[3]> 2022-08-25 11:44:28 31
Note that this method just looks for the best outcome and returns the corresponding row from the archive. For stochastic optimization problems this is overly optimistic and leads to biased results. For this reason the optimizer can use advanced methods to determine the result and set it itself.
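If you want to inspect more than the single best row, one possible alternative (an assumption, not taken from the vignette) is to convert the archive to a data.table and order it on the outcome yourself:

# Hypothetical helper: list the three best evaluations by ordering on y
# (smallest y first, since this objective is minimized).
tab = as.data.table(inst$archive)
head(tab[order(y)], n = 3)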