Preprint / Working Paper, Year: 2019

On Lazy Training in Differentiable Programming

Abstract

In a series of recent theoretical works, it was shown that strongly over-parameterized neural networks trained with gradient-based methods could converge exponentially fast to zero training loss, with their parameters hardly varying. In this work, we show that this "lazy training" phenomenon is not specific to over-parameterized neural networks, and is due to a choice of scaling, often implicit, that makes the model behave as its linearization around the initialization, thus yielding a model equivalent to learning with positive-definite kernels. Through a theoretical analysis, we exhibit various situations where this phenomenon arises in non-convex optimization and we provide bounds on the distance between the lazy and linearized optimization paths. Our numerical experiments bring a critical note, as we observe that the performance of commonly used non-linear deep convolutional neural networks in computer vision degrades when trained in the lazy regime. This makes it unlikely that "lazy training" is behind the many successes of neural networks in difficult high dimensional tasks.
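To make the scaling behind lazy training concrete, here is a minimal sketch (not the authors' code; it assumes PyTorch, and the toy data, network sizes, learning rate, and names such as alpha and train are illustrative choices). It trains the centered, rescaled model alpha * (h_w - h_{w_0}) on a squared loss divided by alpha^2 and reports how far the parameters travel from their initialization: as alpha grows, the training error still decreases while the parameters hardly move, which is the lazy regime described above.

# Minimal sketch of lazy training via output scaling (assumes PyTorch).
import copy
import torch

torch.manual_seed(0)

# Toy regression problem (purely illustrative).
X = torch.randn(64, 10)
y = torch.randn(64, 1)

def train(alpha, steps=1000, lr=0.05):
    model = torch.nn.Sequential(
        torch.nn.Linear(10, 256), torch.nn.ReLU(), torch.nn.Linear(256, 1)
    )
    model0 = copy.deepcopy(model)                    # frozen copy of the initialization
    w0 = [p.detach().clone() for p in model.parameters()]
    with torch.no_grad():
        h0 = model0(X)                               # h_{w_0}(X), fixed during training
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Centered, scaled model alpha * (h_w - h_{w_0}); objective rescaled by 1/alpha^2.
        pred = alpha * (model(X) - h0)
        loss = ((pred - y) ** 2).mean() / alpha**2
        loss.backward()
        opt.step()
    with torch.no_grad():
        # Unscaled training error of the effective predictor.
        fit = ((alpha * (model(X) - h0) - y) ** 2).mean().item()
    # Relative distance travelled by the parameters since initialization.
    num = sum(((p - q) ** 2).sum() for p, q in zip(model.parameters(), w0))
    den = sum((q ** 2).sum() for q in w0)
    return fit, (num / den).sqrt().item()

for alpha in (1.0, 10.0, 100.0):
    fit, motion = train(alpha)
    print(f"alpha={alpha:6.1f}  training error={fit:.4f}  relative parameter motion={motion:.2e}")

Under these assumptions, the relative parameter motion shrinks roughly like 1/alpha, while the fitted predictor alpha * (h_w - h_{w_0}) follows the dynamics of the model linearized at initialization, i.e. kernel learning with the tangent kernel.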
Main file: lazy-main.pdf (822.99 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01945578 , version 1 (05-12-2018)
hal-01945578 , version 2 (11-12-2018)
hal-01945578 , version 3 (21-02-2019)
hal-01945578 , version 4 (08-06-2019)
hal-01945578 , version 5 (18-06-2019)
hal-01945578 , version 6 (07-01-2020)

Identifiers

HAL Id: hal-01945578

Cite

Lenaic Chizat, Edouard Oyallon, Francis Bach. On Lazy Training in Differentiable Programming. 2019. ⟨hal-01945578v4⟩
5408 views
4423 downloads
