Journal articles

Simpler PAC-Bayesian Bounds for Hostile Data

Pierre Alquier 1, 2, Benjamin Guedj 3
3 MODAL - MOdel for Data Analysis and Learning (LPP - Laboratoire Paul Painlevé - UMR 8524, Université de Lille, Sciences et Technologies; Inria Lille - Nord Europe; METRICS - Evaluation des technologies de santé et des pratiques médicales - ULR 2694; Polytech Lille - École polytechnique universitaire de Lille)
Abstract: PAC-Bayesian learning bounds are of the utmost interest to the learning community. Their role is to connect the generalization ability of an aggregation distribution $\rho$ to its empirical risk and to its Kullback-Leibler divergence with respect to some prior distribution $\pi$. Unfortunately, most of the available bounds rely on heavy assumptions such as boundedness and independence of the observations. This paper aims at relaxing these constraints and provides PAC-Bayesian learning bounds that hold for dependent, heavy-tailed observations (hereafter referred to as \emph{hostile data}). In these bounds, the Kullback-Leibler divergence is replaced with a general version of Csisz\'ar's $f$-divergence. We prove a general PAC-Bayesian bound, and show how to use it in various hostile settings.
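For context only, here is a sketch of a classical PAC-Bayesian bound of McAllester type; it is not the bound proved in this paper, it requires a loss bounded in $[0,1]$ and i.i.d. observations, and the exact logarithmic term varies across versions. Writing $R(\theta)$ for the risk, $r_n(\theta)$ for the empirical risk on $n$ observations, and $\delta \in (0,1)$ for a confidence level (notation assumed here, not taken from the record), with probability at least $1-\delta$, simultaneously for all $\rho$,
\[
\mathbb{E}_{\theta\sim\rho}\big[R(\theta)\big] \;\le\; \mathbb{E}_{\theta\sim\rho}\big[r_n(\theta)\big] \;+\; \sqrt{\frac{\mathrm{KL}(\rho\,\|\,\pi) + \log\frac{2\sqrt{n}}{\delta}}{2n}}.
\]
The bounds announced in the abstract keep this general shape but replace $\mathrm{KL}(\rho\,\|\,\pi)$ with a Csisz\'ar $f$-divergence, which is what allows the boundedness and independence assumptions on the observations to be dropped.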

https://hal.inria.fr/hal-01385064
Contributor: Benjamin Guedj
Submitted on: Thursday, May 23, 2019 - 7:57:05 AM
Last modification on: Tuesday, November 22, 2022 - 2:26:15 PM

File: main.pdf (produced by the author(s))

Identifiers: HAL Id: hal-01385064, DOI: 10.1007/s10994-017-5690-0

Citation

Pierre Alquier, Benjamin Guedj. Simpler PAC-Bayesian Bounds for Hostile Data. Machine Learning, 2018, 107 (5), pp.887-902. ⟨10.1007/s10994-017-5690-0⟩. ⟨hal-01385064v3⟩
