Conference papers

Variance Reduced Stochastic Gradient Descent with Neighbors

Abstract: Stochastic Gradient Descent (SGD) is a workhorse in machine learning, yet its slow convergence can be a computational bottleneck. Variance reduction techniques such as SAG, SVRG and SAGA have been proposed to overcome this weakness, achieving linear convergence. However, these methods are either based on computing full gradients at pivot points or on keeping per-data-point corrections in memory, so speed-ups relative to SGD may require a minimum number of epochs to materialize. This paper investigates algorithms that exploit neighborhood structure in the training data to share and re-use information about past stochastic gradients across data points, which offers advantages in the transient optimization phase. As a side product, we provide a unified convergence analysis for a family of variance reduction algorithms, which we call memorization algorithms. We provide experimental results supporting our theory.
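
For illustration, the memorization idea in the abstract can be sketched as a SAGA-style update in which the fresh stochastic gradient also refreshes the stored corrections of neighboring data points. This is a minimal sketch, not the paper's code: the least-squares objective, the function name saga_neighbors, and the neighborhood lists are illustrative assumptions, and setting neighbors[i] = [i] recovers plain SAGA.

    import numpy as np

    def saga_neighbors(A, b, neighbors, step=0.01, epochs=10, seed=0):
        # SAGA-style memorization update on an illustrative least-squares
        # objective f(x) = (1/2n) * sum_i (a_i^T x - b_i)^2.
        # neighbors[i] lists the data points whose stored correction is
        # refreshed with point i's gradient (neighbor sharing).
        rng = np.random.default_rng(seed)
        n, d = A.shape
        x = np.zeros(d)
        memory = np.zeros((n, d))   # per-data-point gradient corrections
        avg = memory.mean(axis=0)   # running mean of the stored corrections
        for _ in range(epochs * n):
            i = rng.integers(n)
            g = (A[i] @ x - b[i]) * A[i]       # stochastic gradient at point i
            x -= step * (g - memory[i] + avg)  # variance-reduced step
            for j in neighbors[i]:             # share g with i's neighbors
                avg += (g - memory[j]) / n
                memory[j] = g
        return x

    # Usage on a synthetic problem with trivial neighborhoods (plain SAGA):
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 5))
    x_true = rng.standard_normal(5)
    b = A @ x_true
    x_hat = saga_neighbors(A, b, neighbors=[[i] for i in range(200)])
    print(np.linalg.norm(x_hat - x_true))  # should shrink toward zero

Refreshing neighbors' corrections with the current gradient keeps the memory fresher at the cost of a small bias, since a neighbor's stored gradient is only approximated by that of a nearby point; this trade-off is what the abstract credits for the advantage in the transient optimization phase.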
Contributor: Simon Lacoste-Julien
Submitted on: Monday, December 28, 2015 - 4:30:01 AM
Last modification on: Wednesday, November 17, 2021 - 12:32:02 PM



  • HAL Id: hal-01248672, version 1
  • arXiv: 1506.03662



Thomas Hofmann, Aurélien Lucchi, Simon Lacoste-Julien, Brian McWilliams. Variance Reduced Stochastic Gradient Descent with Neighbors. NIPS 2015 - Advances in Neural Information Processing Systems 28, Dec 2015, Montreal, Canada. ⟨hal-01248672⟩


