Journal article in Advances in Neural Information Processing Systems, 2023

(S)GD over Diagonal Linear Networks: Implicit Regularisation, Large Stepsizes and Edge of Stability

Abstract

In this paper, we investigate the impact of stochasticity and large stepsizes on the implicit regularisation of gradient descent (GD) and stochastic gradient descent (SGD) over diagonal linear networks. We prove the convergence of GD and SGD with macroscopic stepsizes in an overparametrised regression setting and characterise their solutions through an implicit regularisation problem. Our crisp characterisation leads to qualitative insights about the impact of stochasticity and stepsizes on the recovered solution. Specifically, we show that large stepsizes consistently benefit SGD for sparse regression problems, while they can hinder the recovery of sparse solutions for GD. These effects are magnified for stepsizes in a tight window just below the divergence threshold, in the "edge of stability" regime. Our findings are supported by experimental results.
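The abstract describes (stochastic) gradient descent over a two-layer diagonal linear network in an overparametrised sparse regression setting. The sketch below is an illustrative NumPy implementation of that setting, assuming the common parametrisation beta = u * v and a small initialisation scale alpha; the dimensions, stepsize and sparse ground truth are placeholder choices, not the paper's exact experimental setup.

import numpy as np

# Illustrative sketch only: a 2-layer diagonal linear network beta = u * v
# trained by (stochastic) gradient descent on an overparametrised sparse
# regression problem. All constants below are placeholder choices.
rng = np.random.default_rng(0)
n, d, k = 20, 100, 3                      # n < d: overparametrised regime
X = rng.standard_normal((n, d))
beta_star = np.zeros(d)
beta_star[:k] = 1.0                       # k-sparse ground truth
y = X @ beta_star

alpha = 0.1                               # initialisation scale
u, v = np.full(d, alpha), np.zeros(d)     # beta_0 = u_0 * v_0 = 0
stepsize = 0.01
use_sgd = True                            # single-sample SGD vs full-batch GD

for _ in range(20_000):
    if use_sgd:
        i = rng.integers(n)
        Xb, yb = X[i:i + 1], y[i:i + 1]
    else:
        Xb, yb = X, y
    beta = u * v                          # effective linear predictor
    grad_beta = Xb.T @ (Xb @ beta - yb) / len(yb)
    # chain rule through beta = u * v (simultaneous update of both layers)
    u, v = u - stepsize * grad_beta * v, v - stepsize * grad_beta * u

beta_hat = u * v
print("distance to sparse solution:", np.linalg.norm(beta_hat - beta_star))

Switching use_sgd between True and False, or varying the stepsize towards the divergence threshold, is the kind of comparison the paper analyses theoretically.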
Main file: 2302.08982.pdf (4.75 MB). Origin: files produced by the author(s).

Dates and versions

hal-04435173 , version 1 (02-02-2024)

License

Attribution

Identifiers

hal-04435173

Cite

Mathieu Even, Scott Pesme, Suriya Gunasekar, Nicolas Flammarion. (S)GD over Diagonal Linear Networks: Implicit Regularisation, Large Stepsizes and Edge of Stability. Advances in Neural Information Processing Systems, 2023. ⟨hal-04435173⟩