Hybrid Deterministic-Stochastic Methods for Data Fitting

Michael P. Friedlander (1), Mark Schmidt (2, 3)
(2) SIERRA - Statistical Machine Learning and Parsimony,
DI-ENS - Département d'informatique de l'École normale supérieure, ENS Paris - École normale supérieure - Paris, Inria Paris-Rocquencourt, CNRS - Centre National de la Recherche Scientifique: UMR 8548
Abstract: Many structured data-fitting applications require the solution of an optimization problem involving a sum over a potentially large number of measurements. Incremental gradient algorithms offer inexpensive iterations by sampling only subsets of the terms in the sum. These methods can make great progress initially, but often slow as they approach a solution. In contrast, full gradient methods achieve steady convergence at the expense of evaluating the full objective and gradient on each iteration. We explore hybrid methods that exhibit the benefits of both approaches. Rate-of-convergence analysis shows that by controlling the size of the subsets in an incremental gradient algorithm, it is possible to maintain the steady convergence rates of full gradient methods. We detail a practical quasi-Newton implementation based on this approach, and numerical experiments illustrate its potential benefits.
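To make the growing-sample idea from the abstract concrete, below is a minimal Python sketch, not the paper's actual algorithm or notation: a plain gradient method for least-squares fitting in which the random subset of measurement terms used to estimate the gradient grows across iterations. The function name and the parameters batch0, grow_rate, and step are illustrative assumptions; the paper's quasi-Newton implementation and its sampling schedule are more sophisticated.

import numpy as np

def growing_batch_gradient(A, b, x0, step=None, batch0=8, grow_rate=1.1,
                           max_iter=200, rng=None):
    """Sketch: min_x (1/2m)||Ax - b||^2 with a growing gradient sample size."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    if step is None:
        # Fixed step 1/L from the Lipschitz constant L = ||A||_2^2 / m
        # of the gradient of the averaged least-squares objective.
        step = m / np.linalg.norm(A, 2) ** 2
    x = x0.astype(float).copy()
    batch = float(batch0)
    for k in range(max_iter):
        size = min(m, int(round(batch)))
        idx = rng.choice(m, size=size, replace=False)
        # Gradient of the average loss over the sampled measurements only;
        # early iterations are cheap, later ones approach the full gradient.
        g = A[idx].T @ (A[idx] @ x - b[idx]) / size
        x -= step * g
        batch *= grow_rate  # grow the sample; eventually all m terms are used
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((1000, 20))
    x_true = rng.standard_normal(20)
    b = A @ x_true + 0.01 * rng.standard_normal(1000)
    x = growing_batch_gradient(A, b, x0=np.zeros(20), rng=1)
    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))

Once the sample size reaches m, each step is an exact full-gradient step, which is how a schedule like this can recover the steady convergence rate of the deterministic method while keeping early iterations inexpensive.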
Document type:
Journal article
SIAM Journal on Scientific Computing, Society for Industrial and Applied Mathematics, 2012, 34 (3), pp. A1380-A1405. DOI: 10.1137/110830629

https://hal.inria.fr/inria-00626571
Contributor: Mark Schmidt
Submitted on: Monday, September 26, 2011 - 15:01:01
Last modified on: Friday, May 25, 2018 - 12:02:06

Citation

Michael P. Friedlander, Mark Schmidt. Hybrid Deterministic-Stochastic Methods for Data Fitting. SIAM Journal on Scientific Computing, Society for Industrial and Applied Mathematics, 2012, 34 (3), pp. A1380-A1405. DOI: 10.1137/110830629. HAL: inria-00626571.

Metrics

Record views: 292