Conference papers

Rolling horizon and state space truncation approximations for zero-sum semi-Markov games with discounted payoff

Abstract: We study the behaviour of the rolling horizon procedure for two-person zero-sum semi-Markov games with infinite-horizon discounted payoff, when the state space is a Borel set and the action spaces are compact. We show geometric convergence of the rewards produced by the rolling horizon policies to the optimal reward function. The approach extends the results of Hernandez-Lerma and Lasserre (IEEE Trans. Automatic Control, 1990) for Markov decision processes and of Chang and Marcus (IEEE Trans. Automatic Control, 2003) for Markov games, both in discrete time. Building on results of Tidball and Altman (J. Control and Optimization, 1996), we also study the convergence of the rewards of the policies given by an approximate rolling horizon procedure, together with approximations of the finite-horizon value functions that are useful for computing these approximate rolling horizon policies. As a particular case, all the results apply to continuous-time Markov decision processes and Markov games.
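The rolling horizon procedure can be illustrated in its simplest special case, a discounted Markov decision process (the one-player case mentioned at the end of the abstract): solve a finite-horizon problem by backward induction, apply only the first action, then re-solve at the next state. The toy MDP below (two states, two actions, transition matrix, rewards, discount factor) is entirely an illustrative assumption, not data from the paper; it is a minimal sketch of the idea, not the authors' construction for semi-Markov games.

```python
# Rolling-horizon sketch for a discounted MDP (the single-controller
# special case of the zero-sum games studied in the paper).
# The MDP below is a made-up toy example.

P = [[[0.9, 0.1], [0.2, 0.8]],   # P[a][s][t]: prob of s -> t under action a
     [[0.5, 0.5], [0.7, 0.3]]]
r = [[1.0, 0.0],                 # r[a][s]: reward for action a in state s
     [0.5, 2.0]]
beta = 0.9                       # discount factor

def finite_horizon_value(H):
    """Optimal H-stage discounted value from each state (backward induction)."""
    v = [0.0, 0.0]
    for _ in range(H):
        v = [max(r[a][s] + beta * sum(P[a][s][t] * v[t] for t in range(2))
                 for a in range(2))
             for s in range(2)]
    return v

def rolling_horizon_policy(H):
    """First action of an H-horizon optimal policy, for each state.

    This is the action a rolling-horizon controller actually plays before
    re-solving at the next decision epoch.
    """
    v = finite_horizon_value(H - 1)
    return [max(range(2),
                key=lambda a: r[a][s] + beta *
                              sum(P[a][s][t] * v[t] for t in range(2)))
            for s in range(2)]

# Finite-horizon values approach the infinite-horizon value at geometric
# rate beta, which is why the rolling-horizon policy stabilizes for
# moderate horizons H.
d1 = max(abs(a - b) for a, b in
         zip(finite_horizon_value(20), finite_horizon_value(40)))
d2 = max(abs(a - b) for a, b in
         zip(finite_horizon_value(40), finite_horizon_value(60)))
```

In the game setting of the paper, the inner `max` over actions would be replaced by the value of a one-stage zero-sum game between the two players; the geometric-convergence mechanism (the discount factor acting as a contraction modulus) is the same.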

https://hal.inria.fr/hal-00863085
Contributor : Alain Jean-Marie
Submitted on : Wednesday, September 18, 2013 - 11:15:39 AM
Last modification on : Tuesday, November 13, 2018 - 2:38:01 AM

Identifiers

  • HAL Id : hal-00863085, version 1

Citation

Eugenio Della Vecchia, Silvia C. Di Marco, Alain Jean-Marie. Rolling horizon and state space truncation approximations for zero-sum semi-Markov games with discounted payoff. INFORMS Applied Probability Society Conference, Tom Britton, Henrik Hult, Ingemar Kaj, and Filip Lindskog, Jul 2011, Stockholm, Sweden. ⟨hal-00863085⟩
