Conference paper

Benchmarking the Pure Random Search on the Bi-objective BBOB-2016 Testbed

Anne Auger (1), Dimo Brockhoff (2), Nikolaus Hansen (1), Dejan Tušar (2), Tea Tušar (2), Tobias Wagner (3)
(1) TAO - Machine Learning and Optimisation: LRI - Laboratoire de Recherche en Informatique, UP11 - Université Paris-Sud - Paris 11, Inria Saclay - Île-de-France, CNRS - Centre National de la Recherche Scientifique: UMR 8623
(2) DOLPHIN - Parallel Cooperative Multi-criteria Optimization: Inria Lille - Nord Europe, CRIStAL - Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189
Abstract: The Comparing Continuous Optimizers platform COCO has become a standard for effortlessly benchmarking numerical (single-objective) optimization algorithms. In 2016, COCO was extended towards multi-objective optimization by providing a first bi-objective test suite. To provide a baseline, we benchmark a pure random search on this bi-objective bbob-biobj test suite of the COCO platform. For each combination of function, dimension n, and instance of the test suite, $10^6 \cdot n$ candidate solutions are sampled uniformly within the sampling box $[-5, 5]^n$.
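The baseline described in the abstract can be sketched as follows. This is a minimal illustration of pure random search under the stated setup (uniform sampling of $10^6 \cdot n$ points in $[-5, 5]^n$), not the authors' COCO experiment code; the function argument `f` and all parameter names are hypothetical, and in the actual benchmark the evaluations would be logged by the COCO framework rather than collected by hand.

```python
import numpy as np

def pure_random_search(f, n, budget_multiplier=10**6,
                       lower=-5.0, upper=5.0, seed=None):
    """Evaluate f on budget_multiplier * n points drawn
    uniformly at random from the box [lower, upper]^n.

    f may return a tuple of objective values (bi-objective case);
    pure random search ignores the returned values entirely, since
    performance is assessed from the logged evaluations.
    """
    rng = np.random.default_rng(seed)
    budget = budget_multiplier * n
    for _ in range(budget):
        x = rng.uniform(lower, upper, size=n)
        f(x)  # side effect: the benchmarking framework records this evaluation
```

Because the search never uses the objective values to guide sampling, it serves purely as a lower-bound baseline against which adaptive algorithms are compared.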
Metadata
Contributor: Dimo Brockhoff
Submitted on: Saturday, January 14, 2017 - 12:48:07 AM
Last modification on: Tuesday, November 22, 2022 - 2:26:16 PM
Long-term archiving on: Saturday, April 15, 2017 - 12:26:31 PM





Anne Auger, Dimo Brockhoff, Nikolaus Hansen, Dejan Tušar, Tea Tušar, et al.. Benchmarking the Pure Random Search on the Bi-objective BBOB-2016 Testbed. GECCO 2016 - Genetic and Evolutionary Computation Conference, Jul 2016, Denver, CO, United States. pp.1217-1223, ⟨10.1145/2908961.2931704⟩. ⟨hal-01435455⟩


