
Sketching for Large-Scale Learning of Mixture Models

Abstract: Learning parameters from voluminous data can be prohibitive in terms of memory and computational requirements. We propose a "compressive learning" framework where we first sketch the data by computing random generalized moments of the underlying probability distribution, then estimate mixture model parameters from the sketch using an iterative algorithm analogous to greedy sparse signal recovery. We exemplify our framework with the sketched estimation of Gaussian Mixture Models (GMMs). We experimentally show that our approach yields results comparable to the classical Expectation-Maximization (EM) technique while requiring significantly less memory and fewer computations when the number of database elements is large. We report large-scale experiments in speaker verification, where our approach makes it possible to fully exploit a corpus of 1000 hours of speech to learn a universal background model at scales computationally inaccessible to EM.
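To make the two stages of the abstract concrete, here is a minimal Python sketch of the idea, not the authors' implementation: the data sketch is the empirical average of random Fourier moments (complex exponentials at random frequencies), and a GMM is then fitted by matching its closed-form characteristic function to that sketch. The isotropic Gaussian frequency distribution, the diagonal-covariance two-component mixture, the fixed equal weights, and the use of a generic optimizer in place of the paper's greedy sparse-recovery-style algorithm are all simplifying assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
d, m = 2, 50                       # data dimension, sketch size
W = rng.normal(0.0, 1.0, (m, d))   # random frequencies (isotropic Gaussian: an assumption)

def sketch_data(X, W):
    # z_j = (1/n) * sum_i exp(i <w_j, x_i>): memory cost is O(m), independent of n
    return np.exp(1j * X @ W.T).mean(axis=0)

def sketch_gmm(weights, mus, sigmas, W):
    # Closed-form sketch of a diagonal-covariance GMM: the characteristic
    # function of N(mu, diag(s)) is exp(i <w, mu> - 0.5 <w^2, s>)
    z = np.zeros(W.shape[0], dtype=complex)
    for a, mu, s in zip(weights, mus, sigmas):
        z += a * np.exp(1j * W @ mu - 0.5 * (W**2) @ s)
    return z

# Toy data: two well-separated Gaussian clusters, sketched once.
X = np.vstack([rng.normal(-2, 1, (5000, d)), rng.normal(2, 1, (5000, d))])
z = sketch_data(X, W)

def loss(theta):
    # Fit K=2 components with fixed equal weights (a simplification);
    # variances are log-parametrized to stay positive.
    mus = theta[:2 * d].reshape(2, d)
    sigmas = np.exp(theta[2 * d:]).reshape(2, d)
    zh = sketch_gmm([0.5, 0.5], mus, sigmas, W)
    return np.sum(np.abs(z - zh) ** 2)

res = minimize(loss, rng.normal(size=4 * d))
print(res.x[:2 * d].reshape(2, d))  # recovered means (up to component permutation)
```

Note that after the sketching pass, the original 10,000 points are no longer needed: all fitting happens against the 50 complex numbers in z, which is what makes the approach attractive when the number of database elements is large.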
Contributor: Nicolas Keriven
Submitted on: Thursday, October 1, 2015 - 5:01:24 PM
Last modification on: Wednesday, February 19, 2020 - 6:40:04 PM
Long-term archiving on: Saturday, January 2, 2016 - 11:24:33 AM


Files produced by the author(s)


  • HAL Id: hal-01208027, version 1


Nicolas Keriven, Anthony Bourrier, Rémi Gribonval, Patrick Pérez. Sketching for Large-Scale Learning of Mixture Models. 2015. ⟨hal-01208027v1⟩


