Stochastic Gradient Descent: Going As Fast As Possible But Not Faster

Alice Schoenauer-Sebag, Marc Schoenauer, Michèle Sebag
TAU - TAckling the Underspecified
LRI - Laboratoire de Recherche en Informatique, UP11 - Université Paris-Sud - Paris 11, Inria Saclay - Île-de-France, CNRS - Centre National de la Recherche Scientifique: UMR8623
Abstract: When applied to training deep neural networks, stochastic gradient descent (SGD) often proceeds through steady progression phases interrupted by catastrophic episodes in which the loss and gradient norm explode. A possible mitigation of such events is to slow down the learning process. This paper presents a novel approach to controlling the SGD learning rate, based on two statistical tests. The first, aimed at fast learning, compares the momentum of the normalized gradient vectors to that of random unit vectors and gracefully increases or decreases the learning rate accordingly. The second is a change-point detection test, aimed at detecting catastrophic learning episodes; when it triggers, the learning rate is instantly halved. Together, the abilities to speed up and slow down the learning rate allow the proposed approach, called SALeRA, to learn as fast as possible but not faster. Experiments on standard benchmarks show that SALeRA performs well in practice and compares favorably to the state of the art.
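
To make the abstract's two mechanisms concrete, here is a minimal Python sketch of how such a learning-rate controller could look. It is a hypothetical illustration, not the paper's implementation: the choice of a Page-Hinkley test for change-point detection, the closed-form norm expected of an EMA of random unit vectors, and all hyperparameter names and values (lr0, beta, adapt, ph_delta, ph_lambda) are assumptions made here for illustration.

```python
import numpy as np

class SALeRAStyleController:
    """Hypothetical sketch of a SALeRA-style learning-rate controller.

    Implements the two tests described in the abstract:
      1. an agreement test comparing the norm of an exponential moving
         average (EMA) of normalized gradients with the norm expected
         if gradient directions were random unit vectors, to gently
         increase or decrease the learning rate;
      2. a change-point detection test on the minibatch loss (a
         Page-Hinkley test is assumed here), which halves the learning
         rate when a catastrophic episode is detected.
    """

    def __init__(self, lr0=0.01, beta=0.9, adapt=1.02,
                 ph_delta=0.005, ph_lambda=10.0):
        self.lr = lr0                # current learning rate
        self.beta = beta             # EMA factor for gradient directions
        self.adapt = adapt           # multiplicative factor for graceful changes
        self.ema = None              # EMA of unit gradient vectors
        self.ph_delta = ph_delta     # Page-Hinkley tolerance
        self.ph_lambda = ph_lambda   # Page-Hinkley alarm threshold
        self.loss_mean = 0.0         # running mean of the loss
        self.ph_sum = 0.0            # Page-Hinkley cumulative statistic
        self.ph_min = 0.0            # running minimum of ph_sum
        self.t = 0                   # number of losses seen

    def update(self, grad, loss):
        """Update the learning rate from one minibatch gradient and loss."""
        # --- Test 1: gradient-agreement test ------------------------------
        g = grad / (np.linalg.norm(grad) + 1e-12)
        self.ema = g if self.ema is None else self.beta * self.ema + (1.0 - self.beta) * g
        # Long-run expected EMA norm for independent random unit vectors:
        # sqrt((1 - beta) / (1 + beta)) (cross terms vanish in expectation).
        random_norm = np.sqrt((1.0 - self.beta) / (1.0 + self.beta))
        if np.linalg.norm(self.ema) > random_norm:
            self.lr *= self.adapt    # gradients agree more than chance: speed up
        else:
            self.lr /= self.adapt    # gradients look random: slow down

        # --- Test 2: change-point detection on the loss -------------------
        self.t += 1
        self.loss_mean += (loss - self.loss_mean) / self.t
        self.ph_sum += loss - self.loss_mean - self.ph_delta
        self.ph_min = min(self.ph_min, self.ph_sum)
        if self.ph_sum - self.ph_min > self.ph_lambda:
            self.lr *= 0.5           # catastrophic episode detected: halve lr
            # resetting the detector after an alarm is a design choice here
            self.loss_mean = self.ph_sum = self.ph_min = 0.0
            self.t = 0
        return self.lr
```

In a training loop, one would call `lr = controller.update(gradient_vector, minibatch_loss)` after each minibatch and use the returned `lr` for the next SGD step. The threshold sqrt((1-beta)/(1+beta)) is the long-run expected norm of an EMA of independent random unit directions, so gradient directions that consistently agree push the EMA norm above it.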

https://hal.inria.fr/hal-01634375
Contributor: Marc Schoenauer
Submitted on: Saturday, November 25, 2017 - 15:53:50
Last modified on: Tuesday, April 17, 2018 - 09:05:05

Identifiers

  • HAL Id: hal-01634375, version 1
  • arXiv: 1709.01427

Citation

Alice Schoenauer-Sebag, Marc Schoenauer, Michèle Sebag. Stochastic Gradient Descent: Going As Fast As Possible But Not Faster. 2017. 〈hal-01634375〉
