Large Deviation Principle for Markov Chains in Continuous Time

Abstract: Let $E$ be a denumerable state space and let $Y_t$ be a homogeneous Markov process on $E$ with generator $R$. We introduce the \emph{empirical generator} $G_t$ of $Y_t$ and prove strong local LDP bounds for it. This allows us to prove the weak LDP in a very general setting, for irreducible non-explosive Markov processes that are not necessarily ergodic. Sanov's theorem is obtained by a contraction argument from the weak LDP for $G_t$. In our opinion this is an improvement over the existing literature, since the LDP in the Markov case generally requires either $E$ to be finite or strong uniformity conditions, which important classes of chains do not satisfy, e.g. bounded jump networks. Moreover, the empirical generator, together with the representation of the rate function as an entropy, allows us to prove nice properties (uniqueness, continuity, convexity). It also leads to applications in simulation (importance sampling) and in the evaluation of the rate function for sample-path LDPs in networks. Finally, it seems that some technical problems can be reduced to convex programs which can be solved with fast algorithms.
Document type:
Report
[Research Report] RR-3877, INRIA. 2000

https://hal.inria.fr/inria-00072776
Contributor: Rapport de Recherche Inria <>
Submitted on: Wednesday, 24 May 2006 - 10:51:53
Last modified on: Saturday, 17 September 2016 - 01:30:01
Document(s) archived on: Sunday, 4 April 2010 - 23:21:54

Identifiers

  • HAL Id : inria-00072776, version 1

Citation

Arnaud De La Fortelle. Large Deviation Principle for Markov Chains in Continuous Time. [Research Report] RR-3877, INRIA. 2000. 〈inria-00072776〉
