Estimate Sequences for Variance-Reduced Stochastic Composite Optimization

Andrei Kulunchakov¹, Julien Mairal¹
¹ Thoth (Learning models from massive data), Inria Grenoble - Rhône-Alpes, LJK - Laboratoire Jean Kuntzmann
Abstract: In this paper, we propose a unified view of gradient-based algorithms for stochastic convex composite optimization by extending the concept of estimate sequence introduced by Nesterov. This point of view covers the stochastic gradient descent method and variants of the incremental approaches SAGA and SVRG, and has several advantages: (i) we provide a generic proof of convergence for the aforementioned methods; (ii) we show that the SVRG variant is adaptive to strong convexity; (iii) we naturally obtain new algorithms with the same guarantees; (iv) we derive generic strategies to make these algorithms robust to stochastic noise, which is useful when data is corrupted by small random perturbations. Finally, we show that this viewpoint is useful to obtain new accelerated algorithms in the sense of Nesterov.
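To make the setting concrete, the sketch below illustrates the kind of variance-reduced method the abstract refers to: a minimal proximal-SVRG loop (in the spirit of Xiao and Zhang's prox-SVRG) for composite objectives of the form (1/n) sum_i f_i(x) + psi(x). This is an illustrative sketch, not the paper's own algorithm; the function names (prox_svrg, prox_l1), the step-size choice, and the Lasso example data are all assumptions made for the demonstration.

    import numpy as np

    def prox_l1(x, t):
        # Soft-thresholding: proximal operator of t * ||x||_1.
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def prox_svrg(grad_i, full_grad, prox, x0, n, step, n_epochs=30, rng=None):
        # Minimal prox-SVRG for min_x (1/n) sum_i f_i(x) + psi(x).
        # grad_i(x, i): gradient of f_i at x; full_grad(x): gradient of the
        # average f at x; prox(z, t): proximal operator of t * psi at z.
        rng = np.random.default_rng(0) if rng is None else rng
        x = x0.copy()
        for _ in range(n_epochs):
            x_anchor = x.copy()
            g_anchor = full_grad(x_anchor)      # full gradient at the anchor
            for _ in range(n):                  # inner loop of length n
                i = rng.integers(n)
                # Variance-reduced gradient estimate: unbiased, and its
                # variance shrinks as x and x_anchor approach the optimum.
                g = grad_i(x, i) - grad_i(x_anchor, i) + g_anchor
                x = prox(x - step * g, step)    # proximal (composite) step
        return x

    # Hypothetical usage: Lasso, with f_i(x) = 0.5 * (a_i^T x - b_i)^2
    # and psi(x) = lam * ||x||_1.
    rng = np.random.default_rng(0)
    n, d, lam = 200, 50, 0.1
    A, b = rng.standard_normal((n, d)), rng.standard_normal(n)
    L = np.max(np.sum(A ** 2, axis=1))          # smoothness constant of the f_i
    x_hat = prox_svrg(lambda x, i: (A[i] @ x - b[i]) * A[i],
                      lambda x: A.T @ (A @ x - b) / n,
                      lambda z, t: prox_l1(z, lam * t),
                      np.zeros(d), n, step=0.1 / L)

After the epoch-level full-gradient pass, each inner iteration costs the same as a stochastic gradient step; it is this variance-reduction mechanism that the paper analyzes through the lens of estimate sequences.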

https://hal.inria.fr/hal-02121913
Contributor: Julien Mairal
Submitted on: Monday, May 6, 2019 - 10:48:28 PM
Last modification on: Tuesday, September 10, 2019 - 1:38:23 PM

Files

main.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-02121913, version 1
  • arXiv: 1905.02374

Citation

Andrei Kulunchakov, Julien Mairal. Estimate Sequences for Variance-Reduced Stochastic Composite Optimization. ICML 2019 - 36th International Conference on Machine Learning, Jun 2019, Long Beach, United States. pp. 1-24. ⟨hal-02121913⟩

Metrics

Record views: 127
File downloads: 359