Information-Geometric Optimization Algorithms: A Unifying Picture via Invariance Principles

Yann Ollivier 1, 2, 3 Ludovic Arnold 3, 2 Anne Auger 4, 5, 3 Nikolaus Hansen 4, 5, 3
1 TAU - TAckling the Underspecified, LRI - Laboratoire de Recherche en Informatique, UP11 - Université Paris-Sud - Paris 11, Inria Saclay - Ile de France, CNRS - Centre National de la Recherche Scientifique : UMR8623
3 TAO - Machine Learning and Optimisation, LRI - Laboratoire de Recherche en Informatique, UP11 - Université Paris-Sud - Paris 11, Inria Saclay - Ile de France, CNRS - Centre National de la Recherche Scientifique : UMR8623
5 RANDOPT - Randomized Optimisation, Inria Saclay - Ile de France
Abstract: We present a canonical way to turn any smooth parametric family of probability distributions on an arbitrary search space X into a continuous-time black-box optimization method on X, the information-geometric optimization (IGO) method. Invariance as a major design principle keeps the number of arbitrary choices to a minimum. The resulting IGO flow is the flow of an ordinary differential equation conducting the natural gradient ascent of an adaptive, time-dependent transformation of the objective function. It makes no particular assumptions on the objective function to be optimized. The IGO method produces explicit IGO algorithms through time discretization. It naturally recovers versions of known algorithms and offers a systematic way to derive new ones. In continuous search spaces, IGO algorithms take a form related to natural evolution strategies (NES). The cross-entropy method is recovered in a particular case with a large time step, and can be extended into a smoothed, parametrization-independent maximum likelihood update (IGO-ML). When applied to the family of Gaussian distributions on R^d, the IGO framework recovers a version of the well-known CMA-ES algorithm and of xNES. For the family of Bernoulli distributions on {0, 1}^d, we recover the seminal PBIL algorithm and cGA. For the distributions of restricted Boltzmann machines, we naturally obtain a novel algorithm for discrete optimization on {0, 1}^d. All these algorithms are natural instances of, and unified under, the single information-geometric optimization framework. The IGO method achieves, thanks to its intrinsic formulation, maximal invariance properties: invariance under reparametrization of the search space X, under a change of parameters of the probability distribution, and under increasing transformation of the function to be optimized. The latter is achieved through an adaptive, quantile-based formulation of the objective. Theoretical considerations strongly suggest that IGO algorithms are essentially characterized by a minimal change of the distribution over time. Therefore they have minimal loss in diversity through the course of optimization, provided the initial diversity is high. First experiments using restricted Boltzmann machines confirm this insight. As a simple consequence, IGO seems to provide, from information theory, an elegant way to simultaneously explore several valleys of a fitness landscape in a single run.
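As a concrete illustration of the framework described in the abstract, the following minimal Python sketch implements one IGO step for the family of Bernoulli distributions on {0, 1}^d, the case in which the abstract notes that a PBIL-style update is recovered. The function name igo_step, the step size eta, the sample size, and the top-half selection weights are illustrative assumptions for this sketch, not details taken from the paper.

```python
# Minimal sketch of an IGO step for Bernoulli distributions on {0,1}^d,
# which (per the abstract) yields a PBIL/cGA-style update.
# Names (igo_step, eta, n_samples) are illustrative, not from the paper.
import numpy as np

def igo_step(f, theta, n_samples=100, eta=0.1, rng=np.random.default_rng()):
    """One IGO natural-gradient step for Bernoulli(theta) on {0,1}^d.

    f : objective to minimize on {0,1}^d
    theta : vector of Bernoulli parameters (the distribution's mean)
    """
    d = theta.shape[0]
    # Sample candidate points from the current distribution.
    x = (rng.random((n_samples, d)) < theta).astype(float)
    fx = np.array([f(xi) for xi in x])
    # Quantile-based (rank-based) weights give invariance under increasing
    # transformations of f. Here: best half gets weight 1, rest 0, normalized.
    ranks = np.argsort(np.argsort(fx))          # rank 0 = best when minimizing
    w = (ranks < n_samples // 2).astype(float)
    w /= w.sum()
    # For Bernoulli distributions in the mean parametrization, the natural
    # gradient step reduces to a weighted recombination of the samples:
    # theta <- theta + eta * sum_j w_j (x_j - theta)   (a PBIL-style update).
    theta_new = theta + eta * (w @ x - theta)
    return np.clip(theta_new, 1e-3, 1 - 1e-3)   # keep parameters interior

# Usage: minimize the onemax-style objective sum(x) on {0,1}^10.
theta = np.full(10, 0.5)
for _ in range(200):
    theta = igo_step(lambda xi: xi.sum(), theta)
print(theta.round(2))  # should drift toward the all-zeros optimum
```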
Document type:
Journal article
Journal of Machine Learning Research, 2017, 18 (18), pp. 1-65

https://hal.inria.fr/hal-01515898
Contributor: Nikolaus Hansen
Submitted on: Friday, April 28, 2017 - 11:37:49
Last modified on: Thursday, May 10, 2018 - 02:04:13
Document(s) archived on: Saturday, July 29, 2017 - 13:01:21

File

14-467.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-01515898, version 1

Citation

Yann Ollivier, Ludovic Arnold, Anne Auger, Nikolaus Hansen. Information-Geometric Optimization Algorithms: A Unifying Picture via Invariance Principles. Journal of Machine Learning Research, 2017, 18 (18), pp. 1-65. ⟨hal-01515898⟩

Metrics

Record views: 531

File downloads: 163