Benchmarking Against the Bests (BBOB)

Once you’ve created your own algorithm, the structure of DEAP allows you to benchmark it against the best algorithms very easily. The interface of the Black-Box Optimization Benchmark (BBOB) is compatible with the toolbox. In fact, once your new algorithm is encapsulated in a main function, there is almost nothing else to do on DEAP’s side. This tutorial will review the essential steps to get everything working with the very basic One Fifth Rule.

Preparing the Algorithm

The BBOB makes use of many continuous functions on which the algorithm will be tested. These functions are given as an argument to the algorithm. The toolbox shall thus register the evaluation in the main function.

The evaluation functions provided by BBOB return a fitness as a single value. The first step is to put each fitness in its own tuple, as required by DEAP’s philosophy on single objective optimization. We will use a decorator for this.

def tupleize(func):
    """A decorator that tuple-ize the result of a function. This is useful
    when the evaluation function returns a single value.
    """
    def wrapper(*args, **kargs):
        return func(*args, **kargs),
    return wrapper
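To see what the decorator does, it can be tried on a stand-alone function. The sphere function below is only a hypothetical stand-in for a BBOB evaluation function; the decorator is repeated here so the snippet is self-contained:

```python
def tupleize(func):
    """A decorator that tuple-ize the result of a function."""
    def wrapper(*args, **kargs):
        return func(*args, **kargs),
    return wrapper

@tupleize
def sphere(x):
    # Hypothetical stand-in for a BBOB evaluation function:
    # returns a single float, which tupleize wraps in a 1-tuple
    return sum(v * v for v in x)

print(sphere([1.0, 2.0]))  # -> (5.0,)
```

The trailing comma in the wrapper is what turns the single fitness value into the one-element tuple expected by DEAP’s Fitness objects.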

The algorithm is encapsulated in a main function that receives four arguments: the evaluation function, the dimensionality of the problem, the maximum number of evaluations, and the target value to reach. As stated earlier, the toolbox is initialized in the main function with the update() function, described in the One Fifth Rule example, and the received evaluation function, which is decorated by our tuple-izer.

Then, the target fitness value is encapsulated in a FitnessMin object so that we can easily compare the individuals with it. The last step is to define the algorithm, which is explained in the One Fifth Rule example.

def main(func, dim, maxfuncevals, ftarget=None):
    toolbox = base.Toolbox()
    toolbox.register("update", update)
    toolbox.register("evaluate", func)
    toolbox.decorate("evaluate", tupleize)

    # Create the desired optimal function value as a Fitness object
    # for later comparison
    opt = creator.FitnessMin((ftarget,))

    # Interval in which to initialize the optimizer
    interval = -5, 5
    sigma = (interval[1] - interval[0]) / 2.0
    alpha = 2.0**(1.0 / dim)

    # Initialize best randomly and worst as a place holder
    best = creator.Individual(random.uniform(interval[0], interval[1]) for _ in range(dim))
    worst = creator.Individual([0.0] * dim)

    # Evaluate the first individual
    best.fitness.values = toolbox.evaluate(best)

    # Evolve until ftarget is reached or the number of evaluation
    # is exhausted (maxfuncevals)
    for g in range(1, maxfuncevals):
        toolbox.update(worst, best, sigma)
        worst.fitness.values = toolbox.evaluate(worst)
        if best.fitness <= worst.fitness:
            # Increase mutation strength and swap the individual
            sigma = sigma * alpha
            best, worst = worst, best
        else:
            # Decrease mutation strength
            sigma = sigma * alpha**(-0.25)

        # Test if we reached the optimum of the function
        # Remember that ">" for fitness means better (not greater)
        if best.fitness > opt:
            return best

    return best
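The main() function above relies on the creator types (FitnessMin, Individual) and the update() function defined in the One Fifth Rule example. Stripped of the DEAP machinery, the same (1+1)-ES loop with one-fifth-rule step-size adaptation can be sketched in plain Python; this is only an illustrative paraphrase of the loop above, not part of the benchmark code:

```python
import random

def one_fifth_es(func, dim, maxfuncevals, ftarget=None):
    # Plain-Python sketch of the (1+1)-ES with the one-fifth success rule,
    # mirroring main() above without the DEAP toolbox.
    interval = -5, 5
    sigma = (interval[1] - interval[0]) / 2.0
    alpha = 2.0**(1.0 / dim)

    best = [random.uniform(*interval) for _ in range(dim)]
    best_fit = func(best)

    for _ in range(1, maxfuncevals):
        # Gaussian mutation around the current best (the update() step)
        child = [random.gauss(x, sigma) for x in best]
        child_fit = func(child)
        if child_fit <= best_fit:
            # Success: keep the child and increase the step size
            best, best_fit = child, child_fit
            sigma *= alpha
        else:
            # Failure: decrease the step size
            sigma *= alpha**(-0.25)
        # Stop early if the target value is reached (minimization)
        if ftarget is not None and best_fit < ftarget:
            break
    return best, best_fit
```

Note that "success" here means the child minimizes at least as well as the parent, which corresponds to best.fitness <= worst.fitness in the DEAP version, where comparisons go through the FitnessMin weights.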

Running the Benchmark

Now that the algorithm is ready, it is time to run it under the BBOB. The following code is taken from the BBOB example with added comments. The fgeneric module provides a LoggingFunction, which takes care of outputting all the data necessary to compare the tested algorithm with the other published (and to-be-published) algorithms.

This logger contains the current problem instance and provides the problem target. Since it is responsible for logging each evaluation function call, we do not even need to save the best individual found by our algorithm (the return value of the main() function). The only line related to the presented algorithm is the call to the main() function.

import math
import random
from itertools import chain

import fgeneric
import bbobbenchmarks as bn


if __name__ == "__main__":
    # Maximum number of restart for an algorithm that detects stagnation
    maxrestarts = 1000

    # Create a COCO experiment that will log the results under the
    # ./output directory
    e = fgeneric.LoggingFunction("output")

    # Iterate over all desired test dimensions
    for dim in (2, 3, 5, 10, 20, 40):
        # Set the maximum number function evaluation granted to the algorithm
        # This is usually function of the dimensionality of the problem
        maxfuncevals = 100 * dim**2
        minfuncevals = dim + 2

        # Iterate over a set of benchmarks (noise free benchmarks here)
        for f_name in bn.nfreeIDs:

            # Iterate over all the instance of a single problem
            # Rotation, translation, etc.
            for instance in chain(range(1, 6), range(21, 31)):

                # Set the function to be used (problem) in the logger
                e.setfun(*bn.instantiate(f_name, iinstance=instance))

                # Independent restarts until maxfunevals or ftarget is reached
                for restarts in range(0, maxrestarts + 1):
                    if restarts > 0:
                        # Signal the experiment that the algorithm restarted
                        e.restart('independent restart')  # additional info

                    # Run the algorithm with the remaining number of evaluations
                    revals = int(math.ceil(maxfuncevals - e.evaluations))
                    main(e.evalfun, dim, revals, e.ftarget)

                    # Stop if ftarget is reached
                    if e.fbest < e.ftarget or e.evaluations + minfuncevals > maxfuncevals:
                        break

                # Log the statistics of the run
                e.finalizerun()

                print('f%d in %d-D, instance %d: FEs=%d with %d restarts, '
                      'fbest-ftarget=%.4e'
                      % (f_name, dim, instance, e.evaluations, restarts,
                         e.fbest - e.ftarget))

Once these experiments are done, the data contained in the output directory can be used to build the results document. See the BBOB web site on how to build the document.

The complete example is available in the file examples/bbob.