
Sparse Inverse Covariance Learning for CMA-ES with Graphical Lasso

Konstantinos Varelas 1,2, Anne Auger 1, Nikolaus Hansen 1
1 RANDOPT - Randomized Optimisation
CMAP - Centre de Mathématiques Appliquées - Ecole Polytechnique, Inria Saclay - Ile de France
Abstract: This paper introduces a variant of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), denoted gl-CMA-ES, that utilizes Graphical Lasso regularization. Our goal is to efficiently solve a certain class of partially separable optimization problems by performing stochastic search with a search model parameterized by a sparse precision, i.e. inverse covariance, matrix. We illustrate the effect of the global weight of the l1 regularizer and investigate how Graphical Lasso with unequal weights can be combined with CMA-ES, allowing the conditional dependency structure of problems with sparse Hessian matrices to be learned. For non-separable sparse problems, the proposed method with appropriately selected weights outperforms CMA-ES and improves its scaling, while for dense problems it maintains the same performance.
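The abstract's core idea — that a sparse precision (inverse covariance) matrix captures the conditional dependency structure of a partially separable problem — can be illustrated with the Graphical Lasso estimator from scikit-learn. The sketch below is not the paper's gl-CMA-ES; it only shows, under assumed toy settings (dimension, penalty weight, sample size chosen for illustration), how an l1-penalized maximum-likelihood fit recovers a sparse precision from samples whose covariance is dense:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# A sparse (tridiagonal) precision matrix Q, as arises for an objective
# whose Hessian couples only neighbouring variables.
n = 8
Q = 2.0 * np.eye(n)
for i in range(n - 1):
    Q[i, i + 1] = Q[i + 1, i] = -0.5

# Sample from N(0, Q^{-1}); note the covariance Q^{-1} itself is dense.
cov = np.linalg.inv(Q)
X = rng.multivariate_normal(np.zeros(n), cov, size=5000)

# Graphical Lasso: l1-penalized maximum-likelihood precision estimate.
# alpha is the global regularization weight discussed in the abstract.
model = GraphicalLasso(alpha=0.05).fit(X)
P = model.precision_

# Entries far off the diagonal (no conditional dependency) are driven
# toward zero, while the tridiagonal structure is retained.
print(P.shape)
print(abs(P[0, n - 1]) < abs(P[0, 0]))
```

Increasing `alpha` zeroes out more off-diagonal entries of the estimated precision; the paper's variant goes further by using non-equal (per-entry) weights so that a known or learned sparsity pattern can be imposed on the search model.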
Contributor: Konstantinos Varelas
Submitted on: Saturday, January 23, 2021 - 8:01:50 AM
Last modification on: Friday, January 21, 2022 - 3:11:03 AM




HAL Id: hal-02960269, version 2


Konstantinos Varelas, Anne Auger, Nikolaus Hansen. Sparse Inverse Covariance Learning for CMA-ES with Graphical Lasso. PPSN 2020 - Sixteenth International Conference on Parallel Problem Solving from Nature, Sep 2020, Leiden, Netherlands. ⟨hal-02960269v2⟩


