Rolling horizon and state space truncation approximations for zero-sum semi-Markov games with discounted payoff - Inria - Institut national de recherche en sciences et technologies du numérique
Conference paper, Year: 2011

Rolling horizon and state space truncation approximations for zero-sum semi-Markov games with discounted payoff

Abstract

We study the behaviour of the rolling horizon procedure for two-person zero-sum semi-Markov games with infinite-horizon discounted payoff, when the state space is a Borel set and the action spaces are compact. We show geometric convergence of the rewards produced by the rolling horizon policies to the optimal reward function. The approach is based on extensions of the results of Hernandez-Lerma and Lasserre (IEEE Trans. Automatic Control, 1990) for Markov decision processes and of Chang and Marcus (IEEE Trans. Automatic Control, 2003) for Markov games, both in discrete time. Based on results from Tidball and Altman (J. Control and Optimization, 1996), we also study the convergence of the rewards of the policies given by an approximate rolling horizon procedure, as well as some approximations of the finite-horizon value functions, which are useful for computing the approximate rolling horizon policies mentioned above. As a particular case, all the results apply to continuous-time Markov decision processes and Markov games.
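The rolling horizon idea can be sketched for the simplest special case mentioned at the end of the abstract, a discounted Markov decision process: at each decision epoch, solve a finite-horizon problem of length H by backward induction and apply only the first action of the resulting policy. The toy transition and reward data below are hypothetical, purely for illustration; they are not taken from the paper.

```python
GAMMA = 0.9  # discount factor

# Hypothetical 2-state, 2-action MDP:
# P[s][a] = list of (next_state, probability), R[s][a] = immediate reward.
P = {0: {0: [(0, 0.8), (1, 0.2)], 1: [(1, 1.0)]},
     1: {0: [(0, 1.0)], 1: [(1, 0.6), (0, 0.4)]}}
R = {0: {0: 1.0, 1: 0.0}, 1: {0: 0.5, 1: 2.0}}

def finite_horizon_value(h):
    """Optimal h-step value function via backward induction, zero terminal reward."""
    v = {s: 0.0 for s in P}
    for _ in range(h):
        v = {s: max(R[s][a] + GAMMA * sum(p * v[t] for t, p in P[s][a])
                    for a in P[s])
             for s in P}
    return v

def rolling_horizon_action(s, h):
    """First action of an optimal h-horizon policy from state s."""
    v = finite_horizon_value(h - 1)
    return max(P[s],
               key=lambda a: R[s][a] + GAMMA * sum(p * v[t] for t, p in P[s][a]))
```

The geometric convergence discussed in the abstract is visible here: the h-horizon value differs from the infinite-horizon value by at most GAMMA**h * Rmax / (1 - GAMMA), so the rolling horizon policy approaches optimality at a geometric rate in H. In the game case treated by the paper, each backward-induction step would instead require solving a zero-sum matrix (or, for compact action spaces, continuous) game, e.g. by linear programming, rather than a simple maximization.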
No file deposited

Dates and versions

hal-00863085 , version 1 (18-09-2013)

Identifiers

  • HAL Id : hal-00863085 , version 1

Cite

Eugenio Della Vecchia, Silvia C. Di Marco, Alain Jean-Marie. Rolling horizon and state space truncation approximations for zero-sum semi-Markov games with discounted payoff. INFORMS Applied Probability Society Conference, Tom Britton, Henrik Hult, Ingemar Kaj, and Filip Lindskog, Jul 2011, Stockholm, Sweden. ⟨hal-00863085⟩