Conference papers

On the Global Convergence of (Fast) Incremental Expectation Maximization Methods

Abstract : The EM algorithm is one of the most popular algorithms for inference in latent data models. The original formulation of the EM algorithm does not scale to large data sets, because the whole data set is required at each iteration of the algorithm. To alleviate this problem, Neal and Hinton [1998] proposed an incremental version of EM (iEM) in which, at each iteration, the conditional expectation of the latent data (E-step) is updated only for a mini-batch of observations. Another approach was proposed by Cappé and Moulines [2009], in which the E-step is replaced by a stochastic approximation step closely related to stochastic gradient descent. In this paper, we analyze the incremental and stochastic versions of the EM algorithm, as well as the variance-reduced version of Chen et al. [2018], in a common unifying framework. We also introduce a new incremental version, inspired by the SAGA algorithm of Defazio et al. [2014]. We establish non-asymptotic bounds for global convergence. Numerical applications are presented to illustrate our findings.
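
The incremental E-step described in the abstract can be illustrated with a minimal sketch (not the authors' implementation): for a toy two-component Gaussian mixture, per-observation responsibilities are stored, only a mini-batch of them is refreshed at each iteration, and the M-step reuses the running sums over all stored responsibilities. All names and parameter choices here (iem_gmm, batch_size, n_iters) are illustrative assumptions.

# Minimal sketch of the incremental E-step idea of Neal and Hinton [1998],
# on a toy two-component Gaussian mixture. Illustrative only.
import numpy as np

def iem_gmm(x, n_iters=200, batch_size=10, seed=0):
    rng = np.random.default_rng(seed)
    n = x.shape[0]
    # initial mixture parameters: weights, means, variances of the two components
    w = np.array([0.5, 0.5])
    mu = np.array([x.min(), x.max()])
    var = np.array([x.var(), x.var()])
    # stored per-observation responsibilities: the "memory" of the incremental E-step
    resp = np.full((n, 2), 0.5)
    for _ in range(n_iters):
        # E-step on a mini-batch only: refresh responsibilities for these points
        idx = rng.choice(n, size=batch_size, replace=False)
        for k in range(2):
            resp[idx, k] = w[k] * np.exp(-0.5 * (x[idx] - mu[k]) ** 2 / var[k]) / np.sqrt(var[k])
        resp[idx] /= resp[idx].sum(axis=1, keepdims=True)
        # M-step from the running sums over *all* stored responsibilities
        nk = resp.sum(axis=0)
        w = nk / n
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 1, 500)])
    print(iem_gmm(data))

In the stochastic variant of Cappé and Moulines [2009], the stored responsibilities would instead be replaced by a single running average of sufficient statistics updated with a decreasing step size.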

Cited literature: 11 references

https://hal.inria.fr/hal-02334656
Contributor : Belhal Karimi
Submitted on : Monday, October 28, 2019 - 10:45:28 AM
Last modification on : Friday, April 30, 2021 - 10:03:19 AM
Long-term archiving on: Wednesday, January 29, 2020 - 1:41:42 PM

File

fiem.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : hal-02334656, version 2

Citation

Belhal Karimi, Hoi-To Wai, Éric Moulines, Marc Lavielle. On the Global Convergence of (Fast) Incremental Expectation Maximization Methods. NeurIPS 2019 - 33rd Annual Conference on Neural Information Processing Systems, Dec 2019, Vancouver, Canada. ⟨hal-02334656v2⟩

Metrics

Record views: 133
File downloads: 383