Conference papers

Deep Equals Shallow for ReLU Networks in Kernel Regimes

Alberto Bietti (1, 2), Francis Bach (3)
1 Thoth - Apprentissage de modèles à partir de données massives (Inria Grenoble - Rhône-Alpes, LJK - Laboratoire Jean Kuntzmann)
3 SIERRA - Statistical Machine Learning and Parsimony (DI-ENS - Département d'informatique de l'École normale supérieure, CNRS - Centre National de la Recherche Scientifique, Inria de Paris)
Abstract: Deep networks are often considered to be more expressive than shallow ones in terms of approximation. Indeed, certain functions can be approximated by deep networks provably more efficiently than by shallow ones; however, no tractable algorithms are known for learning such deep models. Separately, a recent line of work has shown that deep networks trained with gradient descent may behave like (tractable) kernel methods in a certain over-parameterized regime, where the kernel is determined by the architecture and initialization, and this paper focuses on approximation for such kernels. We show that for ReLU activations, the kernels derived from deep fully-connected networks have essentially the same approximation properties as their "shallow" two-layer counterpart, namely the same eigenvalue decay for the corresponding integral operator. This highlights the limitations of the kernel framework for understanding the benefits of such deep architectures. Our main theoretical result relies on characterizing such eigenvalue decays through differentiability properties of the kernel function, which also applies readily to the study of other kernels defined on the sphere.
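The kernels the abstract refers to can be written down explicitly: on the unit sphere, the kernel induced by a deep fully-connected ReLU network reduces to a one-dimensional function of the inner product between inputs, obtained by composing the arc-cosine kernels of degree 0 and 1. The sketch below is not code from the paper; it follows the standard NTK recursion for fully-connected ReLU networks, and the function names and the `depth` parameter are illustrative.

```python
import numpy as np

def kappa0(u):
    # Arc-cosine kernel of degree 0 (covariance of the ReLU derivative).
    u = np.clip(u, -1.0, 1.0)  # guard against rounding outside [-1, 1]
    return (np.pi - np.arccos(u)) / np.pi

def kappa1(u):
    # Arc-cosine kernel of degree 1 (normalized ReLU covariance).
    u = np.clip(u, -1.0, 1.0)
    return (u * (np.pi - np.arccos(u)) + np.sqrt(1.0 - u * u)) / np.pi

def deep_relu_ntk(u, depth):
    """NTK of a fully-connected ReLU network with `depth` hidden layers,
    as a function of the cosine u = <x, x'> between unit-norm inputs."""
    sigma = np.asarray(u, dtype=float)  # layer-0 covariance: the raw inner product
    theta = sigma.copy()                # running NTK
    for _ in range(depth):
        sigma_dot = kappa0(sigma)       # derivative covariance at this layer
        sigma = kappa1(sigma)           # forward covariance at this layer
        theta = theta * sigma_dot + sigma
    return theta

if __name__ == "__main__":
    # Kernel values (normalized to 1 at u = 1) near u = 1, where the ReLU
    # non-smoothness that governs the eigenvalue decay appears.
    u = np.array([0.0, 0.9, 0.99, 1.0])
    for depth in (1, 5):
        print(depth, deep_relu_ntk(u, depth) / deep_relu_ntk(1.0, depth))
```

Such a sketch makes it easy to compare the deep and two-layer kernels as functions on the sphere, in particular their behavior near u = 1, whose regularity is what determines the eigenvalue decay studied in the paper.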

https://hal.inria.fr/hal-02963250
Contributor: Alberto Bietti
Submitted on: Wednesday, March 17, 2021 - 11:29:12 PM
Last modification on: Thursday, April 8, 2021 - 10:29:46 AM

File: main.pdf (file produced by the author(s))

Identifiers

  • HAL Id: hal-02963250, version 2
  • ARXIV: 2009.14397

Citation

Alberto Bietti, Francis Bach. Deep Equals Shallow for ReLU Networks in Kernel Regimes. ICLR 2021 - International Conference on Learning Representations, May 2021, Virtual, Austria. pp.1-22. ⟨hal-02963250v2⟩

Metrics

Record views: 1013
File downloads: 295