Benchmarking the Pure Random Search on the Bi-objective BBOB-2016 Testbed

Anne Auger¹, Dimo Brockhoff², Nikolaus Hansen¹, Dejan Tušar², Tea Tušar², Tobias Wagner³

¹ TAO - Machine Learning and Optimisation; CNRS (UMR 8623), Inria Saclay - Île-de-France, Université Paris-Sud (Paris 11), LRI - Laboratoire de Recherche en Informatique
² DOLPHIN - Parallel Cooperative Multi-criteria Optimization; Inria Lille - Nord Europe, CRIStAL - Centre de Recherche en Informatique, Signal et Automatique de Lille (UMR 9189)
Abstract: The Comparing Continuous Optimizers platform (COCO) has become a standard for effortlessly benchmarking numerical (single-objective) optimization algorithms. In 2016, COCO was extended to multi-objective optimization with a first bi-objective test suite. To provide a baseline, we benchmark pure random search on this bi-objective bbob-biobj test suite of the COCO platform. For each combination of function, dimension $n$, and instance of the test suite, $10^6 \cdot n$ candidate solutions are sampled uniformly at random within the sampling box $[-5, 5]^n$.
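The baseline is simple enough to sketch in a few lines. Below is a minimal Python sketch of pure random search under the stated setup, assuming a generic bi-objective evaluation callable; the names pure_random_search and double_sphere are illustrative only, and the actual experiments use the COCO platform's bbob-biobj problem suite and observer-based logging, which are not reproduced here.

    import numpy as np

    def pure_random_search(evaluate, n, budget_multiplier=10**6, seed=None):
        """Pure random search: draw budget_multiplier * n points uniformly
        at random from the sampling box [-5, 5]^n and evaluate each one.

        `evaluate` is assumed to map an n-dimensional point to its two
        objective values; in the COCO setting, performance logging happens
        as a side effect of each evaluation, so no return value is needed.
        """
        rng = np.random.default_rng(seed)
        for _ in range(budget_multiplier * n):
            x = rng.uniform(-5.0, 5.0, size=n)
            evaluate(x)

    if __name__ == "__main__":
        # Toy usage on a hypothetical bi-objective problem (two sphere
        # functions with shifted optima), with a tiny budget for speed;
        # the real bbob-biobj problems are provided by the COCO platform.
        def double_sphere(x):
            return float(np.sum(x ** 2)), float(np.sum((x - 1.0) ** 2))

        pure_random_search(double_sphere, n=5, budget_multiplier=100, seed=42)

Note that the algorithm keeps no state between samples: every candidate is drawn independently of all previous ones, which is exactly what makes it a natural lower-bound baseline for the benchmarked suite.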

https://hal.inria.fr/hal-01435455
Citation

Anne Auger, Dimo Brockhoff, Nikolaus Hansen, Dejan Tušar, Tea Tušar, Tobias Wagner. Benchmarking the Pure Random Search on the Bi-objective BBOB-2016 Testbed. GECCO 2016 - Genetic and Evolutionary Computation Conference, Jul 2016, Denver, CO, United States. pp. 1217-1223. ⟨10.1145/2908961.2931704⟩. ⟨hal-01435455⟩
