
Accelerated greedy mixture learning

Abstract: Mixture probability densities are popular models that are used in several data mining and machine learning applications, e.g., clustering. A standard algorithm for learning such models from data is the Expectation-Maximization (EM) algorithm. However, EM can be slow with large datasets, and therefore approximation techniques are needed. In this paper we propose a variational approximation to the greedy EM algorithm which offers speedups that are at least linear in the number of data points. Moreover, by strictly increasing a lower bound on the data log-likelihood in every learning step, our algorithm guarantees convergence. We demonstrate the proposed algorithm on a synthetic experiment where satisfactory results are obtained.
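The abstract does not spell out the algorithm, but the greedy EM scheme it accelerates can be illustrated with a plain (non-accelerated) sketch for a 1-D Gaussian mixture: fit a single component, then repeatedly insert a new component and refit with EM, so the data log-likelihood improves as components are added. This is only an assumed baseline sketch, not the paper's variational approximation; all function names and the component-insertion heuristic (a randomly chosen data point) are illustrative.

```python
import numpy as np

def log_gauss(X, mu, var):
    # Log-density of each point under each 1-D Gaussian component: shape (n, k)
    return -0.5 * (np.log(2 * np.pi * var) + (X[:, None] - mu) ** 2 / var)

def loglik(X, w, mu, var):
    # Data log-likelihood under the mixture (log-sum-exp over components)
    a = np.log(w) + log_gauss(X, mu, var)
    m = a.max(axis=1, keepdims=True)
    return float((m[:, 0] + np.log(np.exp(a - m).sum(axis=1))).sum())

def em(X, w, mu, var, n_iter=50):
    # Standard EM updates for a 1-D Gaussian mixture
    for _ in range(n_iter):
        log_r = np.log(w) + log_gauss(X, mu, var)
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)          # responsibilities, (n, k)
        nk = r.sum(axis=0) + 1e-12
        w = nk / len(X)
        mu = (r * X[:, None]).sum(axis=0) / nk
        var = (r * (X[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return w, mu, var

def greedy_em(X, k_max, seed=0):
    # Greedy mixture learning: start from one component, insert one at a time
    rng = np.random.default_rng(seed)
    w, mu, var = np.array([1.0]), np.array([X.mean()]), np.array([X.var()])
    for k in range(2, k_max + 1):
        # Insert a new component at a random data point, then refit with EM
        alpha = 1.0 / k
        w = np.append((1 - alpha) * w, alpha)
        mu = np.append(mu, rng.choice(X))
        var = np.append(var, X.var() / k)
        w, mu, var = em(X, w, mu, var)
    return w, mu, var
```

The cost of the sketch above is dominated by the responsibility computations over all n data points at every EM step; the paper's contribution is a variational approximation that reduces exactly this per-step cost while still monotonically increasing a lower bound on the log-likelihood.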
Document type:
Conference papers

Contributor: Jakob Verbeek
Submitted on: Tuesday, April 5, 2011 - 2:55:51 PM
Last modification on: Monday, September 25, 2017 - 10:08:04 AM
Long-term archiving on: Wednesday, July 6, 2011 - 2:57:28 AM


Files produced by the author(s)


  • HAL Id: inria-00321482, version 2


Jan Nunnink, Jakob Verbeek, Nikos Vlassis. Accelerated greedy mixture learning. Benelearn: Annual Machine Learning Conference of Belgium and the Netherlands, Jan 2004, Brussels, Belgium. ⟨inria-00321482v2⟩


