Accelerated greedy mixture learning

Abstract: Mixture probability densities are popular models used in several data mining and machine learning applications, e.g., clustering. A standard algorithm for learning such models from data is the Expectation-Maximization (EM) algorithm. However, EM can be slow on large datasets, so approximation techniques are needed. In this paper we propose a variational approximation to the greedy EM algorithm which offers speedups that are at least linear in the number of data points. Moreover, by strictly increasing a lower bound on the data log-likelihood in every learning step, our algorithm guarantees convergence. We demonstrate the proposed algorithm on a synthetic experiment in which satisfactory results are obtained.
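The abstract builds on two standard ingredients: EM updates for a mixture model, and a greedy strategy that inserts components one at a time. As a rough illustration of that greedy baseline (not the paper's variational acceleration; the function names, the 1-D Gaussian setting, and the candidate-insertion heuristic here are all assumptions for the sketch), greedy EM might look like:

```python
import numpy as np

def log_gauss(x, mu, var):
    # Log-density of a 1-D Gaussian evaluated at points x.
    return -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

def log_likelihood(x, w, mu, var):
    # Total data log-likelihood under the mixture (w, mu, var).
    comp = np.log(w)[:, None] + log_gauss(x[None, :], mu[:, None], var[:, None])
    return np.logaddexp.reduce(comp, axis=0).sum()

def em(x, w, mu, var, iters=50):
    # Standard EM for a 1-D Gaussian mixture: each iteration
    # monotonically increases the data log-likelihood.
    for _ in range(iters):
        # E-step: responsibilities r[k, n] of component k for point n.
        comp = np.log(w)[:, None] + log_gauss(x[None, :], mu[:, None], var[:, None])
        comp -= np.logaddexp.reduce(comp, axis=0)
        r = np.exp(comp)
        # M-step: re-estimate weights, means, variances.
        nk = r.sum(axis=1) + 1e-12
        w = nk / len(x)
        mu = (r * x).sum(axis=1) / nk
        var = (r * (x[None, :] - mu[:, None]) ** 2).sum(axis=1) / nk + 1e-6
    return w, mu, var

def greedy_em(x, max_k=3, rng=None):
    # Greedy mixture learning: start from the single-Gaussian ML fit,
    # then repeatedly insert one new component and re-run EM.
    rng = rng or np.random.default_rng(0)
    w, mu, var = np.array([1.0]), np.array([x.mean()]), np.array([x.var()])
    for _ in range(max_k - 1):
        # Hypothetical insertion heuristic: place the candidate at a
        # random data point with a small weight and moderate variance.
        cand = rng.choice(x)
        a = 0.5 / (len(w) + 1)
        w = np.append(w * (1 - a), a)
        mu = np.append(mu, cand)
        var = np.append(var, x.var() / 4)
        w, mu, var = em(x, w, mu, var)
    return w, mu, var
```

Note that each insertion above is followed by full-data EM passes; the cost the paper targets is exactly these repeated passes, which its variational lower bound avoids while still guaranteeing a monotone increase of the bound.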
Document type: Conference paper

Cited literature: 11 references


https://hal.inria.fr/inria-00321482
Contributor: Jakob Verbeek
Submitted on: Tuesday, April 5, 2011 - 2:55:51 PM
Last modification on: Monday, September 25, 2017 - 10:08:04 AM
Long-term archiving on: Wednesday, July 6, 2011 - 2:57:28 AM

Files

verbeek04bnl.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: inria-00321482, version 2

Citation

Jan Nunnink, Jakob Verbeek, Nikos Vlassis. Accelerated greedy mixture learning. Benelearn: Annual Machine Learning Conference of Belgium and the Netherlands, Jan 2004, Brussels, Belgium. ⟨inria-00321482v2⟩
