On Regularization and Robustness of Deep Neural Networks

Alberto Bietti 1 Grégoire Mialon 1 Julien Mairal 1
1 Thoth - Learning models from large-scale data
LJK - Laboratoire Jean Kuntzmann, Inria Grenoble - Rhône-Alpes
Abstract: Despite their success, deep neural networks suffer from several drawbacks: they lack robustness to the small input perturbations known as "adversarial examples," and training them from small amounts of annotated data is challenging. In this work, we study the connection between regularization and robustness by viewing neural networks as elements of a reproducing kernel Hilbert space (RKHS) of functions and by regularizing them with the RKHS norm. Although this norm cannot be computed exactly, we consider several approximations based on upper and lower bounds. These approximations lead to new regularization strategies, and also recover existing ones such as spectral norm penalties or constraints, gradient penalties, and adversarial training. Moreover, the kernel framework allows us to derive margin-based bounds on adversarial generalization. We evaluate the resulting algorithms for learning on small datasets and for learning adversarially robust models, and discuss implications for learning implicit generative models.
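Among the regularization strategies mentioned in the abstract, spectral norm penalties require the largest singular value of each weight matrix, which is typically estimated by power iteration rather than a full SVD. The sketch below is illustrative only (it is not code from the paper) and assumes a dense weight matrix represented as nested Python lists; the `spectral_norm` helper and the penalty weight `lam` are hypothetical names chosen for the example:

```python
import math
import random

def matvec(M, x):
    """Multiply matrix M (a list of rows) by vector x."""
    return [sum(m * v for m, v in zip(row, x)) for row in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

def spectral_norm(W, n_iter=100, seed=0):
    """Estimate the largest singular value of W by power iteration."""
    rng = random.Random(seed)
    v = [rng.random() + 0.1 for _ in W[0]]  # random nonzero start vector
    Wt = transpose(W)
    for _ in range(n_iter):
        # Alternate u <- Wv and v <- W^T u, normalizing each time.
        u = matvec(W, v)
        nu = math.sqrt(sum(x * x for x in u))
        u = [x / nu for x in u]
        v = matvec(Wt, u)
        nv = math.sqrt(sum(x * x for x in v))
        v = [x / nv for x in v]
    # At convergence, sigma = u^T W v.
    return sum(a * b for a, b in zip(u, matvec(W, v)))

# A spectral penalty would add lam * spectral_norm(W)**2 to the training loss.
print(spectral_norm([[3.0, 0.0], [0.0, 1.0]]))  # converges to 3.0
```

In practice the same idea is applied per layer, with the singular vectors cached across training steps so that one or two iterations per step suffice.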
Document type: Preprints, Working Papers, ...

https://hal.inria.fr/hal-01884632
Contributor: Alberto Bietti
Submitted on: Monday, October 1, 2018 - 11:29:21 AM
Last modification on: Friday, December 7, 2018 - 3:38:27 PM
Long-term archiving on: Wednesday, January 2, 2019 - 1:21:11 PM

File: main.pdf (produced by the author(s))

Identifiers

  • HAL Id: hal-01884632, version 1
  • ARXIV: 1810.00363

Citation

Alberto Bietti, Grégoire Mialon, Julien Mairal. On Regularization and Robustness of Deep Neural Networks. 2018. ⟨hal-01884632v1⟩
