Improved and Generalized Upper Bounds on the Complexity of Policy Iteration

Bruno Scherrer
MAIA - Autonomous Intelligent Machine, Inria Nancy - Grand Est, LORIA - AIS - Department of Complex Systems, Artificial Intelligence & Robotics
Abstract: Given a Markov Decision Process (MDP) with $n$ states and $m$ actions per state, we study the number of iterations needed by Policy Iteration (PI) algorithms to converge to the optimal $\gamma$-discounted policy. We consider two variations of PI: Howard's PI, which changes the actions in all states with a positive advantage, and Simplex-PI, which only changes the action in the state with maximal advantage. We show that Howard's PI terminates after at most $O \left( \frac{nm}{1-\gamma} \log \left( \frac{1}{1-\gamma} \right)\right)$ iterations, improving by a factor $O(\log n)$ a result by Hansen et al. (2013), while Simplex-PI terminates after at most $O \left( \frac{n^2 m}{1-\gamma} \log \left( \frac{1}{1-\gamma} \right)\right)$ iterations, improving by a factor $O(\log n)$ a result by Ye (2011). Under some structural assumptions on the MDP, we then consider bounds that are independent of the discount factor $\gamma$: given a measure of the maximal transient time $\tau_t$ and the maximal time $\tau_r$ to revisit states in recurrent classes under all policies, we show that Simplex-PI terminates after at most $\tilde O \left( n^3 m^2 \tau_t \tau_r \right)$ iterations. This generalizes a recent result for deterministic MDPs by Post & Ye (2012), in which $\tau_t \le n$ and $\tau_r \le n$. We explain why similar results seem hard to derive for Howard's PI. Finally, under the additional (restrictive) assumption that the state space is partitioned into two sets of states that are respectively transient and recurrent for all policies, we show that Simplex-PI and Howard's PI terminate after at most $\tilde O(nm (\tau_t+\tau_r))$ iterations.
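To make the two update rules concrete, here is a minimal NumPy sketch of both variants. It is an illustration only, not code from the paper: the conventions (P as a list of m per-action n x n transition matrices, R as an n x m reward matrix, the stopping tolerance 1e-10) are assumptions of this sketch.

```python
# A minimal sketch (assumed conventions, not the paper's code) of the
# two Policy Iteration variants studied in the paper, for a finite MDP
# with n states and m actions per state.
import numpy as np

def policy_iteration(P, R, gamma, variant="howard"):
    """P: list of m transition matrices, each n x n (rows sum to 1).
    R: n x m reward matrix. gamma: discount factor in [0, 1)."""
    n, m = R.shape
    pi = np.zeros(n, dtype=int)              # arbitrary initial policy
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly.
        P_pi = np.stack([P[pi[s]][s] for s in range(n)])
        r_pi = R[np.arange(n), pi]
        v = np.linalg.solve(np.eye(n) - gamma * P_pi, r_pi)
        # Q-values and advantages of each action in each state w.r.t. v.
        Q = R + gamma * np.stack([P[a] @ v for a in range(m)], axis=1)
        adv = Q.max(axis=1) - v
        if adv.max() <= 1e-10:                # no positive advantage: optimal
            return pi, v
        if variant == "howard":
            # Howard's PI: switch to a greedy action in every state
            # with a positive advantage.
            pi = Q.argmax(axis=1)
        else:
            # Simplex-PI: switch the action only in the single state
            # with maximal advantage.
            s = adv.argmax()
            pi[s] = Q[s].argmax()
```

Both variants evaluate the current policy exactly at each iteration; they differ only in how many states switch their action per iteration, which is precisely what drives the iteration bounds above.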
Document type:
Conference paper
Neural Information Processing Systems (NIPS) 2013, Dec 2013, South Lake Tahoe, United States.

https://hal.inria.fr/hal-00921261
Contributor: Bruno Scherrer
Submitted on: Friday, December 20, 2013 - 10:24:32
Last modified on: Thursday, January 11, 2018 - 06:25:23
Document(s) archived on: Friday, March 21, 2014 - 00:40:10

Files

nips2013.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-00921261, version 1

Citation

Bruno Scherrer. Improved and Generalized Upper Bounds on the Complexity of Policy Iteration. Neural Information Processing Systems (NIPS) 2013, Dec 2013, South Lake Tahoe, United States. 2013. 〈hal-00921261〉


Metrics

Record views: 471
File downloads: 249