A Kernel Perspective for Regularizing Deep Neural Networks

Alberto Bietti 1 Grégoire Mialon 1, 2 Dexiong Chen 1 Julien Mairal 1
1 Thoth - Learning models from massive data
Inria Grenoble - Rhône-Alpes, LJK - Laboratoire Jean Kuntzmann
2 SIERRA - Statistical Machine Learning and Parsimony
DI-ENS - Department of Computer Science, École normale supérieure, CNRS - Centre National de la Recherche Scientifique, Inria Paris
Abstract: We propose a new point of view for regularizing deep neural networks by using the norm of a reproducing kernel Hilbert space (RKHS). Although this norm cannot be computed exactly, it admits upper and lower approximations that lead to various practical strategies. Specifically, this perspective (i) provides a common umbrella for many existing regularization principles, including spectral norm penalties, gradient penalties, and adversarial training, (ii) leads to new, effective regularization penalties, and (iii) suggests hybrid strategies that combine lower and upper bounds to obtain better approximations of the RKHS norm. We show experimentally that this approach is effective both for learning on small datasets and for obtaining adversarially robust models.
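The spectral norm penalty mentioned in the abstract (one of the lower/upper-bound strategies for approximating the RKHS norm) can be sketched in a few lines. The sketch below is illustrative only, not the authors' implementation: it uses plain NumPy instead of a deep learning framework, and the helper names `spectral_norm` and `penalty` are hypothetical. It estimates each weight matrix's largest singular value by power iteration and sums the squared estimates as a regularization term.

```python
import numpy as np

def spectral_norm(W, n_iters=50, seed=0):
    """Estimate the largest singular value of W by power iteration.

    Power iteration repeatedly applies W and W^T to a random vector;
    the Rayleigh-quotient-style estimate u^T W v converges to the
    spectral norm ||W||_2 (the top singular value).
    """
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(W.shape[0])
    u /= np.linalg.norm(u)
    for _ in range(n_iters):
        v = W.T @ u              # right singular direction estimate
        v /= np.linalg.norm(v)
        u = W @ v                # left singular direction estimate
        u /= np.linalg.norm(u)
    return float(u @ W @ v)

def penalty(weights, lam=0.01):
    """Hypothetical regularizer: lam * sum of squared spectral norms,
    one term per layer weight matrix."""
    return lam * sum(spectral_norm(W) ** 2 for W in weights)
```

In a training loop, such a term would be added to the data-fitting loss; in practice, frameworks differentiate through the power-iteration estimate or reuse the singular vectors across steps to keep the cost low.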

https://hal.inria.fr/hal-01884632
Contributor: Alberto Bietti
Submitted on: Tuesday, May 14, 2019 - 2:20:36 PM
Last modification on: Monday, September 30, 2019 - 3:43:18 PM


Identifiers

  • HAL Id: hal-01884632, version 4
  • arXiv: 1810.00363

Citation

Alberto Bietti, Grégoire Mialon, Dexiong Chen, Julien Mairal. A Kernel Perspective for Regularizing Deep Neural Networks. ICML 2019 - 36th International Conference on Machine Learning, Jun 2019, Long Beach, CA, United States. pp.664-674. ⟨hal-01884632v4⟩
