Research Report, Year: 1998

Large Deviation Principle for a Markov Chain with a Countable State Space

Abstract

Let E be a denumerable state space and X a homogeneous Markov chain on E with kernel P. Then the chain X satisfies a weak Sanov theorem, i.e. a weak large deviation principle (LDP) holds for the law of the pair empirical measure. In our opinion this improves on the existing literature, since an LDP in the Markov case generally requires either E to be finite or strong uniformity conditions, which important classes of chains (e.g. bounded-jump networks) do not satisfy. Moreover, this LDP holds for any discrete state space Markov chain, including non-ergodic chains.
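For context, here is a minimal sketch, in standard notation not quoted from the report, of the objects the abstract refers to: the pair empirical measure of the chain and the two bounds that constitute a weak LDP with some rate function $I$ (the report identifies the appropriate rate function; only the general form is shown here).

% Standard conventions assumed; not taken verbatim from the report.
\[
  L_n \;=\; \frac{1}{n}\sum_{k=0}^{n-1}\delta_{(X_k,\,X_{k+1})}
  \qquad\text{(pair empirical measure, a random probability measure on } E\times E\text{)}
\]
A weak LDP requires the upper bound only for compact sets, together with the usual lower bound for open sets:
\[
  \limsup_{n\to\infty}\frac{1}{n}\log \mathbb{P}(L_n\in K)\;\le\;-\inf_{\nu\in K} I(\nu)
  \quad\text{for all compact } K\subset\mathcal{P}(E\times E),
\]
\[
  \liminf_{n\to\infty}\frac{1}{n}\log \mathbb{P}(L_n\in O)\;\ge\;-\inf_{\nu\in O} I(\nu)
  \quad\text{for all open } O\subset\mathcal{P}(E\times E).
\]
A full LDP would demand the upper bound for all closed sets; the abstract's point is that the weak form holds for every countable-state chain, without finiteness or uniformity assumptions.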
Main file: RR-3503.pdf (708.4 KB)

Dates and versions

inria-00073182, version 1 (24-05-2006)

Identifiers

  • HAL Id: inria-00073182, version 1

Cite

Arnaud de La Fortelle, Guy Fayolle. Large Deviation Principle for a Markov Chain with a Countable State Space. [Research Report] RR-3503, INRIA. 1998. ⟨inria-00073182⟩