Simpler PAC-Bayesian Bounds for Hostile Data

Pierre Alquier, Benjamin Guedj
MODAL - MOdel for Data Analysis and Learning, Inria Lille - Nord Europe; LPP - Laboratoire Paul Painlevé - UMR 8524; CERIM - Santé publique : épidémiologie et qualité des soins - EA 2694; Polytech Lille; Université de Lille 1; IUT’A
Abstract: PAC-Bayesian learning bounds are of the utmost interest to the learning community. Their role is to connect the generalization ability of an aggregation distribution $\rho$ to its empirical risk and to its Kullback-Leibler divergence with respect to some prior distribution $\pi$. Unfortunately, most of the available bounds typically rely on heavy assumptions such as boundedness and independence of the observations. This paper aims at relaxing these constraints and provides PAC-Bayesian learning bounds that hold for dependent, heavy-tailed observations (hereafter referred to as \emph{hostile data}). In these bounds the Kullback-Leibler divergence is replaced with a general version of Csisz\'ar's $f$-divergence. We prove a general PAC-Bayesian bound, and show how to use it in various hostile settings.
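For context, a minimal sketch of the two objects the abstract refers to; these are standard textbook statements, not the paper's own results. In the classical bounded i.i.d. setting, McAllester's PAC-Bayesian bound states that with probability at least $1-\delta$ over $n$ observations, simultaneously for all aggregation distributions $\rho$,
$$R(\rho) \;\leq\; r_n(\rho) + \sqrt{\frac{\mathrm{KL}(\rho\,\|\,\pi) + \ln(2\sqrt{n}/\delta)}{2n}},$$
where $R(\rho)$ denotes the expected risk and $r_n(\rho)$ the empirical risk, both averaged over $\rho$. The Csisz\'ar $f$-divergence that replaces the Kullback-Leibler term in the hostile setting is, for a convex $f$ with $f(1)=0$ and $\rho$ absolutely continuous with respect to $\pi$,
$$D_f(\rho\,\|\,\pi) \;=\; \int f\!\left(\frac{\mathrm{d}\rho}{\mathrm{d}\pi}\right)\mathrm{d}\pi.$$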
Document type: Journal article
Machine Learning, Springer Verlag, to appear

Cited literature: 34 references

https://hal.inria.fr/hal-01385064
Contributor: Benjamin Guedj
Submitted on: Sunday, October 23, 2016 - 18:16:13
Last modified on: Thursday, January 11, 2018 - 06:23:18

File

main.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-01385064, version 2

Citation

Pierre Alquier, Benjamin Guedj. Simpler PAC-Bayesian Bounds for Hostile Data. Machine Learning, Springer Verlag, to appear. 〈hal-01385064v2〉


Metrics

Record views: 226
File downloads: 106