Efficient greedy learning of Gaussian mixtures

Abstract : We present a deterministic greedy method to learn a mixture of Gaussians. The key element is that we build up the mixture component-wise: we start with one component, add new components one at a time, and update the mixture between component insertions. Instead of directly solving an optimization problem involving the parameters of all components, we replace it with a sequence of component-allocation problems involving only the parameters of the new component. Included are experimental results obtained from extensive tests on artificially generated data sets. The new learning method is compared with the standard approach of EM with random initialization, as well as with other existing approaches to learning Gaussian mixtures.
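The abstract only sketches the procedure at a high level. Below is a minimal illustrative sketch of component-wise greedy fitting, not the authors' exact algorithm: it uses scikit-learn's GaussianMixture rather than the paper's implementation, and it assumes that candidate new components are initialized at randomly chosen data points, with EM re-run over all components after each insertion. The function name greedy_gmm and its parameters (max_components, n_candidates, seed) are invented for this sketch.

import numpy as np
from sklearn.mixture import GaussianMixture

def greedy_gmm(X, max_components=10, n_candidates=10, seed=None):
    # Hypothetical sketch of greedy, component-wise mixture learning.
    rng = np.random.default_rng(seed)
    # Start with a single Gaussian (sample mean and covariance).
    gmm = GaussianMixture(n_components=1).fit(X)
    for k in range(2, max_components + 1):
        best = None
        for _ in range(n_candidates):
            # Candidate insertion: a new component centred on a randomly chosen
            # data point, given weight 1/k; existing weights are rescaled so the sum stays 1.
            idx = rng.integers(len(X))
            means = np.vstack([gmm.means_, X[idx]])
            weights = np.append(gmm.weights_ * (1.0 - 1.0 / k), 1.0 / k)
            # Re-run EM over all k components starting from this initialization.
            cand = GaussianMixture(n_components=k,
                                   weights_init=weights,
                                   means_init=means).fit(X)
            # Keep the candidate with the highest EM lower bound on the log-likelihood.
            if best is None or cand.lower_bound_ > best.lower_bound_:
                best = cand
        gmm = best
    return gmm

# Example use: model = greedy_gmm(X, max_components=5) for an (n, d) data array X.

Selecting among a few candidate insertions by the resulting EM lower bound stands in here for the paper's component-allocation step; the exact insertion criterion is described in the paper itself.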
Document type : Conference papers

https://hal.inria.fr/inria-00321510
Contributor : Jakob Verbeek
Submitted on : Wednesday, February 16, 2011 - 5:05:52 PM
Last modification on : Monday, September 25, 2017 - 10:08:04 AM
Long-term archiving on : Tuesday, May 17, 2011 - 2:36:15 AM

Files

verbeek01bnaic.pdf (files produced by the author(s))

Identifiers

  • HAL Id : inria-00321510, version 1

Citation

Jakob Verbeek, Nikos Vlassis, Ben Krose. Efficient greedy learning of Gaussian mixtures. The 13th Belgian-Dutch Conference on Artificial Intelligence (BNAIC'01), Oct 2001, Amsterdam, Netherlands. pp.251--258. ⟨inria-00321510⟩
