Breaking the Nonsmooth Barrier: A Scalable Parallel Method for Composite Optimization

Fabian Pedregosa 1 Rémi Leblond 1 Simon Lacoste-Julien 2
1 SIERRA - Statistical Machine Learning and Parsimony
DI-ENS - Département d'informatique de l'École normale supérieure, CNRS - Centre National de la Recherche Scientifique, Inria de Paris
Abstract: Due to their simplicity and excellent performance, parallel asynchronous variants of stochastic gradient descent have become popular methods to solve a wide range of large-scale optimization problems on multi-core architectures. Yet, despite their practical success, support for nonsmooth objectives is still lacking, making them unsuitable for many problems of interest in machine learning, such as the Lasso, group Lasso or empirical risk minimization with convex constraints. In this work, we propose and analyze ProxASAGA, a fully asynchronous sparse method inspired by SAGA, a variance reduced incremental gradient algorithm. The proposed method is easy to implement and significantly outperforms the state of the art on several nonsmooth, large-scale problems. We prove that our method achieves a theoretical linear speedup with respect to the sequential version under assumptions on the sparsity of gradients and block-separability of the proximal term. Empirical benchmarks on a multi-core architecture illustrate practical speedups of up to 12x on a 20-core machine.
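To make the abstract concrete, the following is a minimal sketch of the *sequential* proximal SAGA iteration that ProxASAGA builds on, applied to the Lasso (squared loss plus an ℓ1 penalty). This is an illustrative reconstruction from the standard SAGA update rule, not the authors' asynchronous sparse implementation; the function and parameter names (`prox_saga`, `lam`, `step`) are chosen here for illustration.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_saga(A, b, lam, step, n_iter=3000, seed=0):
    """Sequential proximal SAGA for min_x (1/2n)||Ax - b||^2 + lam*||x||_1.

    Hedged sketch: ProxASAGA is the asynchronous, sparsity-exploiting
    variant of this loop; here we show only the basic serial update.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    grad_mem = np.zeros((n, d))   # memory of the last gradient seen per sample
    grad_avg = np.zeros(d)        # running average of the stored gradients
    for _ in range(n_iter):
        i = rng.integers(n)
        g_new = A[i] * (A[i] @ x - b[i])        # gradient of the i-th loss term
        v = g_new - grad_mem[i] + grad_avg      # SAGA variance-reduced estimate
        grad_avg += (g_new - grad_mem[i]) / n   # keep the average consistent
        grad_mem[i] = g_new
        x = soft_threshold(x - step * v, step * lam)  # proximal gradient step
    return x
```

The proximal step is what the "nonsmooth barrier" in the title refers to: plain asynchronous SGD variants handle only the smooth part, whereas the block-separability assumption in the paper is what lets this prox be applied asynchronously on sparse, per-block coordinates.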
Document type:
Conference paper
NIPS 2017 - Thirty-First Annual Conference on Neural Information Processing Systems, Dec 2017, Long Beach, United States. pp.1-28, 2017

https://hal.inria.fr/hal-01638058
Contributor: Fabian Pedregosa
Submitted on: Sunday, November 19, 2017 - 06:03:26
Last modified on: Thursday, April 26, 2018 - 10:28:58
Long-term archiving on: Tuesday, February 20, 2018 - 12:29:38

Identifiers

  • HAL Id : hal-01638058, version 1
  • ARXIV : 1707.06468


Citation

Fabian Pedregosa, Rémi Leblond, Simon Lacoste-Julien. Breaking the Nonsmooth Barrier: A Scalable Parallel Method for Composite Optimization. NIPS 2017 - Thirty-First Annual Conference on Neural Information Processing Systems, Dec 2017, Long Beach, United States. pp.1-28, 2017. 〈hal-01638058〉


Metrics

  • Record views: 347
  • File downloads: 153