Rolling horizon and state space truncation approximations for zero-sum semi-Markov games with discounted payoff

Abstract: We study the behaviour of the rolling horizon procedure for two-person zero-sum semi-Markov games with infinite-horizon discounted payoff, when the state space is a Borel set and the action spaces are compact. We show geometric convergence of the rewards produced by the rolling horizon policies to the optimal reward function. The approach is based on extensions of the results of Hernandez-Lerma and Lasserre (IEEE Trans. Automatic Control, 1990) for Markov decision processes and of Chang and Marcus (IEEE Trans. Automatic Control, 2003) for Markov games, both in discrete time. Based on results of Tidball and Altman (J. Control and Optimization, 1996), we also study the convergence of the rewards of the policies given by an approximate rolling horizon procedure, together with approximations of the finite-horizon value functions that are useful for computing the approximate rolling horizon policies mentioned above. As a particular case, all the results apply to continuous-time Markov decision processes and Markov games.
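For readers unfamiliar with the procedure, here is a minimal sketch of the rolling horizon scheme in the discounted zero-sum setting, written in standard notation that is assumed here rather than taken from the communication (in the semi-Markov model the constant discount factor \alpha below is replaced by an expected discount over the random sojourn time between decision epochs). The n-horizon value functions are generated by iterating a Shapley-type minimax operator T,

    v_n(x) = (T v_{n-1})(x) = \sup_{a \in A(x)} \inf_{b \in B(x)} \Big\{ r(x,a,b) + \alpha \int_X v_{n-1}(y) \, Q(dy \mid x,a,b) \Big\}, \qquad v_0 \equiv 0,

and the rolling horizon policy \pi_n plays, in each state visited, a first-stage optimal action of the n-horizon problem. Because T is an \alpha-contraction in the supremum norm, one expects a bound of the form

    \| v^* - v_{\pi_n} \|_\infty \le C \, \alpha^n

for some constant C, which is the type of geometric convergence announced in the abstract.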
Document type:
Conference paper
Henrik Hult, Kavita Ramanan, and Marty Reiman (eds.). INFORMS Applied Probability Society Conference, Jul 2011, Stockholm, Sweden, 2011.

https://hal.inria.fr/hal-00863085
Contributor: Alain Jean-Marie
Submitted on: Wednesday, September 18, 2013 - 11:15:39
Last modified on: Saturday, January 27, 2018 - 01:32:13

Identifiers

  • HAL Id: hal-00863085, version 1

Citation

Eugenio Della Vecchia, Silvia C. Di Marco, Alain Jean-Marie. Rolling horizon and state space truncation approximations for zero-sum semi-Markov games with discounted payoff. Henrik Hult, Kavita Ramanan, and Marty Reiman (eds.). INFORMS Applied Probability Society Conference, Jul 2011, Stockholm, Sweden, 2011. ⟨hal-00863085⟩
