
Non-asymptotic Analysis of Biased Stochastic Approximation Scheme

Abstract: Stochastic approximation (SA) is a key method used in statistical learning. Recently, its non-asymptotic convergence analysis has been considered in many papers. However, most prior analyses are made under restrictive assumptions, such as unbiased gradient estimates and a convex objective function, which significantly limit their applicability to sophisticated tasks such as online and reinforcement learning. These restrictions are all essentially relaxed in this work. In particular, we analyze a general SA scheme to minimize a non-convex, smooth objective function. We consider an update procedure whose drift term depends on a state-dependent Markov chain and whose mean field is not necessarily of gradient type, thereby covering approximate second-order methods and allowing asymptotic bias in the one-step updates. We illustrate these settings with the online EM algorithm and the policy-gradient method for average reward maximization in reinforcement learning.
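The scheme described above can be sketched in a few lines. The following is a minimal illustrative example, not the paper's exact algorithm: the iterate is driven by a drift term H(theta, X) evaluated along a state-dependent Markov chain (here a hypothetical AR(1) process whose mean depends on theta), so the one-step update is biased, and the objective is smooth but non-convex.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(theta):
    # Hypothetical smooth, non-convex objective
    return theta**2 + 2.0 * np.sin(theta)

def grad_f(theta):
    return 2.0 * theta + 2.0 * np.cos(theta)

theta, x = 3.0, 0.0
for k in range(1, 5001):
    # State-dependent Markov chain: AR(1) whose mean drifts with theta,
    # so the noise does not average to zero at the current iterate
    x = 0.5 * x + 0.1 * theta + rng.normal(scale=0.1)
    # Biased drift term: gradient estimate perturbed by the chain state
    h = grad_f(theta) + x
    gamma = 0.5 / np.sqrt(k)  # diminishing step size
    theta -= gamma * h

# The iterate settles near a stationary point of f, up to the
# asymptotic bias induced by the Markovian perturbation
print(theta, grad_f(theta))
```

Because the chain's stationary mean is proportional to theta, the limit point is slightly offset from the true stationary point of f; this is exactly the kind of asymptotic bias the analysis accommodates.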
Document type: Conference papers

Cited literature: 16 references
Contributor: Belhal Karimi
Submitted on: Monday, May 13, 2019 - 4:49:06 PM
Last modification on: Friday, April 30, 2021 - 10:03:19 AM




HAL Id: hal-02127750, version 1


Belhal Karimi, Blazej Miasojedow, Éric Moulines, Hoi-To Wai. Non-asymptotic Analysis of Biased Stochastic Approximation Scheme. COLT 2019 - 32nd Annual Conference on Learning Theory, Jun 2019, Phoenix, United States. pp. 1-33. ⟨hal-02127750⟩


