
Sketching Datasets for Large-Scale Learning (long version)

Abstract: This article considers "sketched learning," or "compressive learning," an approach to large-scale machine learning where datasets are massively compressed before learning (e.g., clustering, classification, or regression) is performed. In particular, a "sketch" is first constructed by computing carefully chosen nonlinear random features (e.g., random Fourier features) and averaging them over the whole dataset. Parameters are then learned from the sketch, without access to the original dataset. This article surveys the current state of the art in sketched learning, including the main concepts and algorithms, their connections with established signal-processing methods, existing theoretical guarantees on both information preservation and privacy preservation, and important open problems.
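To make the abstract's pipeline concrete, here is a minimal illustrative sketch in Python: it averages random Fourier features over a dataset, so the whole dataset is summarized by a single short complex vector. The function name, the Gaussian choice of frequency vectors, and all parameter values are assumptions for illustration, not the paper's prescribed construction.

```python
import numpy as np

def compute_sketch(X, n_features=64, scale=1.0, seed=0):
    """Illustrative dataset sketch: average the random Fourier
    features exp(i * Omega^T x) over all samples of X.
    (Hypothetical helper; names and defaults are assumptions.)"""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Random frequency vectors; Gaussian sampling is one common choice
    Omega = rng.normal(scale=scale, size=(d, n_features))
    # Nonlinear random features for each sample: shape (n, n_features)
    features = np.exp(1j * (X @ Omega))
    # Average over the whole dataset: the sketch, shape (n_features,)
    return features.mean(axis=0)

# Usage: 10,000 points in 5 dimensions compressed to 64 complex numbers;
# learning would then proceed from z alone, without revisiting X.
X = np.random.default_rng(1).normal(size=(10_000, 5))
z = compute_sketch(X)
print(z.shape)  # (64,)
```

Note that the sketch size is fixed in advance and independent of the number of samples, which is what makes the approach attractive at large scale.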
Document type :
Preprints, Working Papers, ...
Contributor: Rémi Gribonval
Submitted on: Tuesday, January 26, 2021 - 5:42:55 PM
Last modification on: Monday, January 10, 2022 - 2:56:02 PM


  • HAL Id : hal-02909766, version 2
  • ARXIV : 2008.01839



Rémi Gribonval, Antoine Chatalic, Nicolas Keriven, Vincent Schellekens, Laurent Jacques, et al. Sketching Datasets for Large-Scale Learning (long version). 2021. ⟨hal-02909766v2⟩
