On the Use of Non-Stationary Policies for Infinite-Horizon Discounted Markov Decision Processes

Bruno Scherrer 1
1 MAIA - Autonomous intelligent machine
Inria Nancy - Grand Est, LORIA - AIS - Department of Complex Systems, Artificial Intelligence & Robotics
Abstract: We consider infinite-horizon $\gamma$-discounted Markov Decision Processes, for which it is known that there exists a stationary optimal policy. We consider the Value Iteration algorithm and the sequence of policies $\pi_1,\dots,\pi_k$ it implicitly generates until some iteration $k$. We provide performance bounds for non-stationary policies involving the last $m$ generated policies that reduce the state-of-the-art bound for the last stationary policy $\pi_k$ by a factor $\frac{1-\gamma}{1-\gamma^m}$. In particular, the use of non-stationary policies allows one to reduce the usual asymptotic performance bound of Value Iteration with errors bounded by $\epsilon$ at each iteration from $\frac{\gamma}{(1-\gamma)^2}\epsilon$ to $\frac{\gamma}{1-\gamma}\epsilon$, which is significant in the usual situation where $\gamma$ is close to $1$. Given Bellman operators that can only be computed with some error $\epsilon$, a surprising consequence of this result is that the problem of "computing an approximately optimal non-stationary policy" is much simpler than that of "computing an approximately optimal stationary policy", and even slightly simpler than that of "approximately computing the value of some fixed policy", since this last problem only has a guarantee of $\frac{1}{1-\gamma}\epsilon$.
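The construction described in the abstract can be sketched numerically: run Value Iteration, keep the greedy policy produced at each step, and compare the last stationary policy with the periodic non-stationary policy that cycles through the last $m$ policies. The snippet below is a minimal NumPy sketch on a small hypothetical 2-state MDP; the MDP, the function names `value_iteration` and `eval_periodic`, and all parameter values are illustrative assumptions, not taken from the report.

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP (illustrative, not from the report):
# P[a, s, s'] is the probability of moving from s to s' under action a;
# R[s, a] is the immediate reward.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.1, 0.9], [0.7, 0.3]]])
R = np.array([[1.0, 0.0], [0.0, 1.0]])
gamma = 0.9

def value_iteration(P, R, gamma, k):
    """Run k steps of Value Iteration, collecting the greedy policy at each step."""
    S = R.shape[0]
    v = np.zeros(S)
    policies = []
    for _ in range(k):
        Q = R + gamma * (P @ v).T          # Q[s, a] = R[s, a] + gamma * E[v(s')]
        policies.append(Q.argmax(axis=1))  # implicitly generated greedy policy pi_i
        v = Q.max(axis=1)
    return v, policies

def eval_periodic(P, R, gamma, policies, sweeps=2000):
    """Approximate the value of the periodic non-stationary policy that
    cycles through `policies`, by repeatedly applying the Bellman
    operators T_pi of the policies in sequence."""
    S = R.shape[0]
    v = np.zeros(S)
    for _ in range(sweeps):
        for pi in policies:
            # One application of T_pi: v <- r_pi + gamma * P_pi v
            v = R[np.arange(S), pi] + gamma * P[pi, np.arange(S)] @ v
    return v

v_k, pis = value_iteration(P, R, gamma, k=200)
v_ns = eval_periodic(P, R, gamma, pis[-3:])  # non-stationary policy from last m=3 policies
```

In this error-free setting both policies converge to the optimum; the paper's point is that when each Bellman backup carries an error bounded by $\epsilon$, the non-stationary policy enjoys the tighter $\frac{\gamma}{1-\gamma}\epsilon$ guarantee.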
Document type: Reports

Cited literature: 2 references

https://hal.inria.fr/hal-00682172
Contributor: Bruno Scherrer
Submitted on: Friday, March 30, 2012 - 4:39:29 PM
Last modification on: Tuesday, December 18, 2018 - 4:40:21 PM
Long-term archiving on: Wednesday, December 14, 2016 - 7:11:48 PM

Files

  • nonstationary.pdf (files produced by the author(s))

Identifiers

  • HAL Id: hal-00682172, version 2
  • arXiv: 1203.5532

Citation

Bruno Scherrer. On the Use of Non-Stationary Policies for Infinite-Horizon Discounted Markov Decision Processes. [Research Report] 2012. ⟨hal-00682172v2⟩

Metrics: 247 record views, 154 file downloads