Conference papers

Non-stationary approximate modified policy iteration

Boris Lesner 1 Bruno Scherrer 2, 3 
1 MAIA - Autonomous intelligent machine
Inria Nancy - Grand Est, LORIA - AIS - Department of Complex Systems, Artificial Intelligence & Robotics
2 BIGS - Biology, genetics and statistics
Inria Nancy - Grand Est, IECL - Institut Élie Cartan de Lorraine
Abstract: We consider the infinite-horizon γ-discounted optimal control problem formalized by Markov Decision Processes. Running any instance of Modified Policy Iteration—a family of algorithms that can interpolate between Value and Policy Iteration—with an error ε at each iteration is known to lead to stationary policies that are at least 2γε/(1−γ)²-optimal. Variations of Value and Policy Iteration, which build l-periodic non-stationary policies, have recently been shown to display a better 2γε/((1−γ)(1−γ^l))-optimality guarantee. We describe a new algorithmic scheme, Non-Stationary Modified Policy Iteration, a family of algorithms parameterized by two integers m ≥ 0 and l ≥ 1 that generalizes all the above-mentioned algorithms. While m allows one to interpolate between Value-Iteration-style and Policy-Iteration-style updates, l specifies the period of the non-stationary policy that is output. We show that this new family of algorithms also enjoys the improved 2γε/((1−γ)(1−γ^l))-optimality guarantee. Perhaps more importantly, we show, by exhibiting an original problem instance, that this guarantee is tight for all m and l; this tightness was to our knowledge only known in two specific cases, Value Iteration (m = 0, l = 1) and Policy Iteration (m = ∞, l = 1).
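The scheme described in the abstract can be sketched in exact (error-free) form as follows. This is a minimal illustrative sketch, not the paper's exact algorithm: the tiny 2-state, 2-action MDP, the function name `nsmpi`, and the precise backup schedule are assumptions made for the example. Here m controls the number of partial policy-evaluation backups (m = 0 recovers Value-Iteration-style updates, large m approaches Policy Iteration), and the output is an l-periodic non-stationary policy built from the last l greedy policies.

```python
import numpy as np

def nsmpi(P, r, gamma, m=2, l=2, n_iter=50):
    """Exact (error-free) sketch of a Non-Stationary MPI-style scheme.

    P: (A, S, S) transition probabilities; r: (A, S) rewards.
    m >= 0: number of extra partial policy-evaluation backups.
    l >= 1: period of the returned non-stationary policy.
    """
    A, S, _ = P.shape
    v = np.zeros(S)
    policies = []
    states = np.arange(S)
    for _ in range(n_iter):
        q = r + gamma * P @ v          # (A, S) state-action values
        pi = q.argmax(axis=0)          # greedy policy w.r.t. v
        policies.append(pi)
        # Partial evaluation: m + 1 applications of the Bellman
        # operator T_pi of the newly greedy policy.
        for _ in range(m + 1):
            v = r[pi, states] + gamma * P[pi, states] @ v
    # The l-periodic non-stationary policy: run these l stationary
    # policies in a loop, most recent first.
    return policies[-l:], v

# Illustrative 2-state, 2-action MDP (assumed data, not from the paper):
# action 0 tends to stay put, action 1 tends to switch states.
P = np.array([[[0.9, 0.1], [0.1, 0.9]],
              [[0.2, 0.8], [0.8, 0.2]]])
r = np.array([[1.0, 0.0],
              [0.0, 1.0]])
pis, v = nsmpi(P, r, gamma=0.9, m=2, l=2)
```

On this symmetric example the scheme converges to the optimal value (10 in both states) and both policies in the period agree, so the non-stationary policy degenerates to a stationary one; the non-stationary construction matters precisely on the adversarial instances the paper exhibits.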
Complete list of metadata
Contributor: Bruno Scherrer
Submitted on: Tuesday, August 25, 2015 - 2:11:01 PM
Last modification on: Monday, July 25, 2022 - 3:44:15 AM
Long-term archiving on: Wednesday, April 26, 2017 - 10:27:34 AM
  • HAL Id: hal-01186664, version 1


Boris Lesner, Bruno Scherrer. Non-stationary approximate modified policy iteration. ICML 2015, Jul 2015, Lille, France. ⟨hal-01186664⟩