Compressive Learning with Privacy Guarantees - HAL Open Archive
Journal article - Information and Inference, 2021

Compressive Learning with Privacy Guarantees


Abstract

This work addresses the problem of learning from large collections of data with privacy guarantees. The compressive learning framework deals with the large scale of datasets by compressing them into a single vector of generalized random moments, from which the learning task is then performed. We show that a simple perturbation of this mechanism with additive noise is sufficient to satisfy differential privacy, a well-established formalism for defining and quantifying the privacy of a random mechanism. We combine this with a feature subsampling mechanism, which reduces the computational cost without damaging privacy. The framework is applied to the tasks of Gaussian modeling, k-means clustering and principal component analysis (PCA), for which sharp privacy bounds are derived. Empirically, the quality (for subsequent learning) of the compressed representation produced by our mechanism is strongly related to the induced noise level, for which we give analytical expressions.
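The mechanism described in the abstract can be illustrated with a minimal, hedged sketch: the dataset is compressed into a vector of generalized random moments (here, averaged random Fourier features, one common choice in compressive learning), and the sketch is then perturbed with additive Gaussian noise. This is an illustrative toy, not the paper's implementation: the feature map, the noise distribution, and the noise scale `sigma` are placeholder assumptions; the paper derives the actual noise calibration needed for differential privacy.

```python
import numpy as np

rng = np.random.default_rng(0)

def compressive_sketch(X, Omega):
    """Compress dataset X (n x d) into an m-dim vector of random moments.

    Here the generalized moments are averaged random Fourier features
    exp(i * x^T omega_j); other feature maps are possible.
    """
    Z = np.exp(1j * X @ Omega)   # n x m complex features, unit modulus
    return Z.mean(axis=0)        # single m-dim sketch of the whole dataset

def private_sketch(X, Omega, sigma):
    """Additive-noise mechanism: perturb the sketch with Gaussian noise.

    sigma is a placeholder; a real DP guarantee requires calibrating it
    to the sketch's sensitivity and the target privacy budget.
    """
    s = compressive_sketch(X, Omega)
    m = s.shape[0]
    noise = rng.normal(0.0, sigma, m) + 1j * rng.normal(0.0, sigma, m)
    return s + noise

# Toy usage: n=1000 points in d=2, m=20 random frequencies.
X = rng.normal(size=(1000, 2))
Omega = rng.normal(size=(2, 20))
s_private = private_sketch(X, Omega, sigma=0.01)
```

Only the noisy m-dimensional sketch (not the raw data) is released; any subsequent learning task (e.g. Gaussian mixture fitting or k-means) operates on `s_private` alone, which is why a single perturbation of the sketch suffices.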
Main file: journal_FINAL_VERSION_HAL.pdf (591.35 KB)
Origin: files produced by the author(s)

Dates and versions

hal-02496896 , version 1 (03-03-2020)
hal-02496896 , version 2 (20-01-2021)

Identifiers

HAL Id: hal-02496896
DOI: 10.1093/imaiai/iaab005

Cite

Antoine Chatalic, Vincent Schellekens, Florimond Houssiau, Yves-Alexandre de Montjoye, Laurent Jacques, et al. Compressive Learning with Privacy Guarantees. Information and Inference, 2021. ⟨10.1093/imaiai/iaab005⟩. ⟨hal-02496896v2⟩