SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives

Aaron Defazio, Francis Bach, Simon Lacoste-Julien
SIERRA - Statistical Machine Learning and Parsimony
DI-ENS - Department of Computer Science, École normale supérieure, Paris; Inria Paris-Rocquencourt; CNRS - Centre National de la Recherche Scientifique, UMR 8548
Abstract: In this work we introduce a new optimisation method called SAGA in the spirit of SAG, SDCA, MISO and SVRG, a set of recently proposed incremental gradient algorithms with fast linear convergence rates. SAGA improves on the theory behind SAG and SVRG, with better theoretical convergence rates, and has support for composite objectives where a proximal operator is used on the regulariser. Unlike SDCA, SAGA supports non-strongly convex problems directly, and is adaptive to any inherent strong convexity of the problem. We give experimental results showing the effectiveness of our method.
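The abstract describes SAGA only at a high level; as a concrete illustration, below is a minimal Python sketch of a SAGA-style update applied to a composite objective (least squares plus an L1 regulariser handled through its proximal operator). The problem instance, data, function names, and the conservative step size of order 1/(3L) are assumptions made for this sketch, not details taken from this record.

# Minimal sketch of a SAGA-style update for a composite objective:
# (1/n) * sum_i 0.5*(a_i @ x - b_i)**2 + lam*||x||_1.
# Problem, data, and step-size choice are illustrative assumptions.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def saga_lasso(A, b, lam=0.1, n_iter=20000, seed=0):
    rng = np.random.default_rng(seed)
    n, d = A.shape
    # Conservative step of order 1/(3L), with L a crude bound on the
    # per-example smoothness constant (assumption for this sketch).
    L = np.max(np.sum(A ** 2, axis=1))
    step = 1.0 / (3.0 * L)
    x = np.zeros(d)
    grad_table = np.zeros((n, d))   # stored gradient of each example
    grad_avg = np.zeros(d)          # running average of the table
    for _ in range(n_iter):
        j = rng.integers(n)
        g_new = (A[j] @ x - b[j]) * A[j]          # fresh gradient of example j
        # SAGA direction: fresh gradient minus stored one, plus table average,
        # followed by a proximal step on the non-smooth regulariser.
        x = soft_threshold(x - step * (g_new - grad_table[j] + grad_avg), step * lam)
        grad_avg += (g_new - grad_table[j]) / n   # keep the average consistent
        grad_table[j] = g_new
    return x

# Usage on synthetic data (illustrative only):
# rng = np.random.default_rng(0)
# A = rng.standard_normal((200, 50)); x_true = np.zeros(50); x_true[:5] = 1.0
# b = A @ x_true + 0.01 * rng.standard_normal(200)
# x_hat = saga_lasso(A, b, lam=0.05)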
Document type:
Conference papers
Advances in Neural Information Processing Systems, Nov 2014, Montreal, Canada

https://hal.archives-ouvertes.fr/hal-01016843
Contributor: Francis Bach
Submitted on: Wednesday, November 12, 2014 - 6:54:01 PM
Last modification on: Wednesday, September 28, 2016 - 4:11:25 PM
Document(s) archived on: Friday, February 13, 2015 - 11:26:02 AM

Files

nips2014.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-01016843, version 3
  • arXiv: 1407.0202

Citation

Aaron Defazio, Francis Bach, Simon Lacoste-Julien. SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives. Advances in Neural Information Processing Systems, Nov 2014, Montreal, Canada. 〈hal-01016843v3〉
