
The steady-state control problem for Markov decision processes

Abstract: This paper addresses a control problem for probabilistic models in the setting of Markov decision processes (MDPs). We are interested in the steady-state control problem, which asks, given an ergodic MDP M and a distribution δ_goal, whether there exists a (history-dependent, randomized) policy π ensuring that the steady-state distribution of M under π is exactly δ_goal. We first show that stationary randomized policies suffice to achieve a given steady-state distribution. We then infer that the steady-state control problem is decidable for MDPs, since it can be expressed as a linear program solvable in PTIME. This decidability result extends to labeled MDPs (LMDPs), where the objective is a steady-state distribution on the labels carried by the states, and we provide a PSPACE algorithm. We also show that a related steady-state language inclusion problem is decidable in EXPTIME for LMDPs. Finally, we prove that for MDPs under partial observation (POMDPs), the steady-state control problem becomes undecidable.
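The abstract states that, once stationary randomized policies are known to suffice, the steady-state control problem reduces to a linear program. As an illustration only (this is a standard occupation-measure formulation, not necessarily the paper's exact construction), the following sketch phrases feasibility over variables x[s, a] = steady-state probability of being in state s and choosing action a, and solves it with SciPy's `linprog`; the example MDP and target distribution are made up for the demo.

```python
import numpy as np
from scipy.optimize import linprog

def steady_state_policy(P, delta):
    """Feasibility LP for the steady-state control problem (illustrative sketch).

    P: array of shape (A, S, S); P[a, s, t] = probability of moving s -> t
       under action a.
    delta: target steady-state distribution over the S states (assumed > 0).
    Returns a stationary randomized policy pi of shape (S, A) with
    pi[s, a] = x[s, a] / delta[s] if the target is achievable, else None.
    """
    A, S, _ = P.shape
    n = S * A                          # one variable per (state, action) pair
    idx = lambda s, a: s * A + a

    A_eq = np.zeros((2 * S, n))
    b_eq = np.zeros(2 * S)
    for s in range(S):                 # marginal: sum_a x[s, a] = delta[s]
        for a in range(A):
            A_eq[s, idx(s, a)] = 1.0
        b_eq[s] = delta[s]
    for t in range(S):                 # balance: inflow into t equals delta[t]
        for s in range(S):
            for a in range(A):
                A_eq[S + t, idx(s, a)] = P[a, s, t]
        b_eq[S + t] = delta[t]

    # Zero objective: we only test feasibility of {x >= 0, A_eq x = b_eq}.
    res = linprog(np.zeros(n), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n)
    if not res.success:
        return None
    x = res.x.reshape(S, A)
    return x / x.sum(axis=1, keepdims=True)

# Hypothetical two-state, two-action ergodic MDP with target (0.5, 0.5).
P = np.array([[[0.9, 0.1], [0.1, 0.9]],
              [[0.1, 0.9], [0.9, 0.1]]])
delta = np.array([0.5, 0.5])
pi = steady_state_policy(P, delta)
```

The returned policy, if any, induces the Markov chain M[s, t] = Σ_a π(a|s)·P[a, s, t], for which δ_goal is stationary by the balance constraints; this mirrors the PTIME claim, since the LP size is polynomial in the MDP.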
Document type: Conference papers
Cited literature: 11 references
Contributor: Loïc Hélouët
Submitted on: Saturday, November 2, 2013 - 10:21:32 PM
Last modification on: Thursday, January 20, 2022 - 5:33:32 PM
Long-term archiving on: Monday, February 3, 2014 - 4:26:51 AM




HAL Id: hal-00879355, version 1


Sundararaman Akshay, Nathalie Bertrand, Serge Haddad, Loïc Hélouët. The steady-state control problem for Markov decision processes. QEST 2013, Sep 2013, Buenos Aires, Argentina. pp.290-304. ⟨hal-00879355⟩


