Policy iteration for perfect information stochastic mean payoff games with bounded first return times is strongly polynomial

Marianne Akian, Stéphane Gaubert
MAXPLUS - Max-plus algebras and mathematics of decision
CMAP - Centre de Mathématiques Appliquées - École Polytechnique, Inria Saclay - Île-de-France, X - École polytechnique, CNRS - Centre National de la Recherche Scientifique : UMR
Abstract: Recent results of Ye, and of Hansen, Miltersen, and Zwick, show that policy iteration for one- or two-player (perfect information) zero-sum stochastic games, restricted to instances with a fixed discount rate, is strongly polynomial. We show that policy iteration for mean-payoff zero-sum stochastic games is also strongly polynomial when restricted to instances with a bounded first mean return time to a given state. The proof is based on methods of nonlinear Perron-Frobenius theory, which allow us to reduce the mean-payoff problem to a discounted problem with a state-dependent discount rate. Our analysis also shows that policy iteration remains strongly polynomial for discounted problems in which the discount rate can be state dependent (and even negative at certain states), provided that the spectral radii of the nonnegative matrices associated with all strategies are bounded from above by a fixed constant strictly less than 1.
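To illustrate the kind of scheme the abstract discusses, the following is a minimal sketch of policy iteration on a one-player discounted problem (a Markov decision process) in which the discount rate is allowed to depend on the state. This is not the paper's algorithm or its game setting; the instance data, function names, and the iterative policy-evaluation step are all illustrative assumptions.

```python
def evaluate(policy, P, r, gamma, iters=2000):
    """Approximate the value of a fixed policy by iterating the
    affine Bellman operator v <- r_policy + diag(gamma) P_policy v.
    Converges because every gamma[s] is strictly less than 1."""
    n = len(policy)
    v = [0.0] * n
    for _ in range(iters):
        v = [r[s][policy[s]]
             + gamma[s] * sum(P[s][policy[s]][t] * v[t] for t in range(n))
             for s in range(n)]
    return v

def policy_iteration(P, r, gamma):
    """Alternate policy evaluation and greedy policy improvement
    until the policy is stable."""
    n = len(r)
    policy = [0] * n
    while True:
        v = evaluate(policy, P, r, gamma)
        # Improvement step: pick, in each state, an action maximizing
        # the one-step lookahead value.
        new_policy = [
            max(range(len(r[s])),
                key=lambda a: r[s][a]
                + gamma[s] * sum(P[s][a][t] * v[t] for t in range(n)))
            for s in range(n)
        ]
        if new_policy == policy:
            return policy, v
        policy = new_policy

# Toy 2-state, 2-action instance (hypothetical data).
P = [[[1.0, 0.0], [0.0, 1.0]],   # P[s][a][t]: transition probabilities
     [[0.5, 0.5], [0.0, 1.0]]]
r = [[1.0, 2.0],                 # r[s][a]: one-step rewards
     [0.0, 3.0]]
gamma = [0.9, 0.8]               # state-dependent discount rates

policy, v = policy_iteration(P, r, gamma)
# Here the optimal policy takes action 1 in both states,
# with values v = [15.5, 15.0].
```

The mean-payoff case treated in the paper is harder precisely because no such uniform contraction is available; the reduction via nonlinear Perron-Frobenius theory supplies a discounted surrogate whose (state-dependent) contraction factors stay bounded away from 1.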
Document type:
Preprint, working paper
Preprint arXiv:1310.4953, 2013

https://hal.inria.fr/hal-00881207
Contributor: Marianne Akian
Submitted on: Thursday, November 7, 2013 - 17:14:04
Last modified on: Thursday, May 10, 2018 - 02:05:57


Identifiers

  • HAL Id: hal-00881207, version 1
  • ARXIV : 1310.4953


Citation

Marianne Akian, Stéphane Gaubert. Policy iteration for perfect information stochastic mean payoff games with bounded first return times is strongly polynomial. Preprint arXiv:1310.4953. 2013. 〈hal-00881207〉
