
Sparse Inverse Covariance Learning for CMA-ES with Graphical Lasso

Konstantinos Varelas (1, 2), Anne Auger (1), Nikolaus Hansen (1)
(1) RANDOPT - Randomized Optimisation, Inria Saclay - Ile de France
(2) CMAP - Centre de Mathématiques Appliquées - Ecole Polytechnique
Abstract: This paper introduces a variant of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), denoted gl-CMA-ES, that utilizes Graphical Lasso regularization. Our goal is to efficiently solve a certain class of partially separable optimization problems by performing stochastic search with a search model parameterized by a sparse precision (i.e., inverse covariance) matrix. We illustrate the effect of the global weight of the l1 regularizer and investigate how Graphical Lasso with unequal weights can be combined with CMA-ES, allowing it to learn the conditional dependency structure of problems with sparse Hessian matrices. For non-separable sparse problems, the proposed method with appropriately selected weights outperforms CMA-ES and improves its scaling, while for dense problems it maintains the same performance.
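The core ingredient described in the abstract is the l1-penalized maximum-likelihood estimation of a sparse precision (inverse covariance) matrix. The following is a minimal, self-contained sketch of that estimation step alone, using scikit-learn's `GraphicalLasso`; it is not the authors' gl-CMA-ES algorithm, and the ground-truth tridiagonal precision matrix and the penalty weight `alpha` are illustrative assumptions.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Assumed ground truth: a tridiagonal precision matrix, i.e. chain-structured
# conditional dependencies, as one would get from a sparse Hessian.
d = 6
P = 2.0 * np.eye(d)
for i in range(d - 1):
    P[i, i + 1] = P[i + 1, i] = 0.6

# The corresponding covariance is dense, so sparsity is only visible
# in the precision matrix, not in the covariance itself.
C = np.linalg.inv(P)
X = rng.multivariate_normal(np.zeros(d), C, size=2000)

# Graphical Lasso: maximize the Gaussian log-likelihood with an l1 penalty
# on the precision entries; alpha plays the role of the global weight of
# the l1 regularizer discussed in the abstract.
gl = GraphicalLasso(alpha=0.05).fit(X)
P_hat = gl.precision_

# Entries outside the tridiagonal band should be shrunk to (near) zero.
off_band = np.abs(P_hat[np.triu_indices(d, k=2)])
print("max off-band entry:", off_band.max())
```

In gl-CMA-ES terms, such a regularized precision estimate would replace the dense covariance model of the search distribution, so that the number of model parameters to adapt grows with the number of nonzero conditional dependencies rather than with d².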

Cited literature: 17 references
Contributor: Konstantinos Varelas
Submitted on: Wednesday, October 7, 2020 - 3:07:21 PM
Last modification on: Thursday, October 8, 2020 - 3:35:58 AM




  • HAL Id: hal-02960269, version 1



Konstantinos Varelas, Anne Auger, Nikolaus Hansen. Sparse Inverse Covariance Learning for CMA-ES with Graphical Lasso. PPSN 2020: the Sixteenth International Conference on Parallel Problem Solving from Nature, Sep 2020, Leiden, Netherlands. ⟨hal-02960269⟩


