Conference papers

Progress in Self-Certified Neural Networks

Maria Perez-Ortiz (1), Omar Rivasplata (1), Emilio Parrado-Hernandez (2), Benjamin Guedj (3, 4, 5, 6, 1), John Shawe-Taylor (1)
(4) MODAL - MOdel for Data Analysis and Learning: LPP - Laboratoire Paul Painlevé (UMR 8524, Université de Lille, Sciences et Technologies), Inria Lille - Nord Europe, METRICS - Évaluation des technologies de santé et des pratiques médicales (ULR 2694), Polytech Lille - École polytechnique universitaire de Lille
Abstract: A learning method is self-certified if it uses all available data to simultaneously learn a predictor and certify its quality with a statistical certificate that is valid on unseen data. Recent work has shown that neural network models trained by optimising PAC-Bayes bounds lead not only to accurate predictors, but also to tight risk certificates, bearing promise towards achieving self-certified learning. In this context, learning and certification strategies based on PAC-Bayes bounds are especially attractive due to their ability to leverage all data to learn a posterior and simultaneously certify its risk. In this paper, we assess the progress towards self-certification in probabilistic neural networks learnt by PAC-Bayes inspired objectives. We empirically compare (on four classification datasets) classical test set bounds for deterministic predictors and a PAC-Bayes bound for randomised self-certified predictors. We first show that neither of these generalisation bounds is far from out-of-sample test set errors. We then show that in data-starvation regimes, holding out data for the test set bounds adversely affects generalisation performance, while self-certified strategies based on PAC-Bayes bounds do not suffer from this drawback, suggesting that they may be a suitable choice for the small-data regime. We also find that probabilistic neural networks learnt by PAC-Bayes inspired objectives lead to certificates that can be surprisingly competitive with commonly used test set bounds.
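For readers unfamiliar with the two kinds of certificate being compared, the following is a minimal sketch with notation introduced here for illustration (it is not taken from the paper, whose exact bounds and training objectives may differ). A classical test set bound holds out n_test examples: for a fixed deterministic predictor h, a Hoeffding-style argument gives, with probability at least 1 - \delta over the held-out sample,

    R(h) \le \hat{R}_{\mathrm{test}}(h) + \sqrt{\frac{\ln(1/\delta)}{2\, n_{\mathrm{test}}}},

where R(h) is the out-of-sample risk and \hat{R}_{\mathrm{test}}(h) the error on the held-out examples. A PAC-Bayes bound of the Langford-Seeger/Maurer type instead certifies a randomised predictor, i.e. a posterior distribution Q over network weights, using all n training examples: with probability at least 1 - \delta over the training sample S,

    \mathrm{kl}\!\left(\hat{R}_S(Q) \,\big\|\, R(Q)\right) \le \frac{\mathrm{KL}(Q \,\|\, P) + \ln\!\left(2\sqrt{n}/\delta\right)}{n},

where P is a prior over weights that must not depend on the sample S appearing in the bound, KL is the Kullback-Leibler divergence between distributions over weights, and kl is the binary KL divergence; numerically inverting the left-hand side yields a risk certificate for Q. Because no data needs to be held out, the same sample serves both to learn Q and to certify it, which is the sense in which such strategies are self-certified.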

https://hal.inria.fr/hal-03430821
Contributor : Benjamin Guedj
Submitted on : Tuesday, November 16, 2021 - 1:48:27 PM
Last modification on : Tuesday, December 6, 2022 - 12:42:13 PM

File

2111.07737.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : hal-03430821, version 1
  • ARXIV : 2111.07737

Citation

Maria Perez-Ortiz, Omar Rivasplata, Emilio Parrado-Hernandez, Benjamin Guedj, John Shawe-Taylor. Progress in Self-Certified Neural Networks. NeurIPS 2021 - Conference on Neural Information Processing Systems, Workshop on Bayesian Deep Learning, Dec 2021, Virtual, United Kingdom. ⟨hal-03430821⟩
