Wasserstein PAC-Bayes Learning: Exploiting Optimisation Guarantees to Explain Generalisation - Inria - Institut national de recherche en sciences et technologies du numérique
Preprints, Working Papers, ... Year: 2023

Wasserstein PAC-Bayes Learning: Exploiting Optimisation Guarantees to Explain Generalisation

Abstract

PAC-Bayes learning is an established framework both for assessing the generalisation ability of learning algorithms and for designing new learning algorithms by exploiting generalisation bounds as training objectives. Most of the existing bounds involve a Kullback-Leibler (KL) divergence, which fails to capture the geometric properties of the loss function that are often useful in optimisation. We address this by extending the emerging Wasserstein PAC-Bayes theory. We develop new PAC-Bayes bounds in which Wasserstein distances replace the usual KL divergence, and demonstrate that sound optimisation guarantees translate into good generalisation abilities. In particular, we provide generalisation bounds for the Bures-Wasserstein SGD by exploiting its optimisation properties.
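The Bures-Wasserstein geometry referred to above has a well-known closed form on Gaussian measures: the squared 2-Wasserstein distance between N(m1, S1) and N(m2, S2) is ||m1 - m2||^2 + tr(S1 + S2 - 2(S1^{1/2} S2 S1^{1/2})^{1/2}). The sketch below is an illustration of that formula only, not code from the paper; the function name `bures_wasserstein_sq` is my own, and it assumes NumPy and SciPy are available.

```python
import numpy as np
from scipy.linalg import sqrtm

def bures_wasserstein_sq(m1, S1, m2, S2):
    """Squared 2-Wasserstein distance between the Gaussians
    N(m1, S1) and N(m2, S2), via the Bures metric on covariances:
    ||m1 - m2||^2 + tr(S1 + S2 - 2 (S1^{1/2} S2 S1^{1/2})^{1/2})."""
    root_S1 = sqrtm(S1)
    # Matrix square root of S1^{1/2} S2 S1^{1/2}; sqrtm may return a
    # complex array with negligible imaginary part, so take the real part.
    cross = np.real(sqrtm(root_S1 @ S2 @ root_S1))
    return float(np.sum((m1 - m2) ** 2) + np.trace(S1 + S2 - 2.0 * cross))

# Sanity check: identical Gaussians are at distance zero.
m, S = np.zeros(2), np.eye(2)
print(bures_wasserstein_sq(m, S, m, S))  # ≈ 0.0
```

For isotropic covariances the formula reduces to the familiar ||m1 - m2||^2 + d(s1 - s2)^2 for N(m1, s1^2 I) and N(m2, s2^2 I) in dimension d, which gives a quick way to check the implementation by hand.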
Main file: main.pdf (505.04 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04080080 , version 1 (24-04-2023)
hal-04080080 , version 2 (30-05-2023)

Identifiers

Cite

Maxime Haddouche, Benjamin Guedj. Wasserstein PAC-Bayes Learning: Exploiting Optimisation Guarantees to Explain Generalisation. 2023. ⟨hal-04080080v2⟩