ρ-POMDPs have Lipschitz-Continuous ϵ-Optimal Value Functions

Abstract: Many state-of-the-art algorithms for solving Partially Observable Markov Decision Processes (POMDPs) rely on turning the problem into a "fully observable" problem---a belief MDP---and exploiting the piece-wise linearity and convexity (PWLC) of the optimal value function in this new state space (the belief simplex ∆). This approach has been extended to solving ρ-POMDPs---i.e., for information-oriented criteria---when the reward ρ is convex in ∆. General ρ-POMDPs can also be turned into "fully observable" problems, but with no means to exploit the PWLC property. In this paper, we focus on POMDPs and ρ-POMDPs with λρ-Lipschitz reward functions, and demonstrate that, for finite horizons, the optimal value function is Lipschitz-continuous. Then, value function approximators are proposed for both upper- and lower-bounding the optimal value function, which are shown to provide uniformly improvable bounds. This allows us to propose two algorithms derived from HSVI, which are empirically evaluated on various benchmark problems.
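
The central property used in the abstract can be illustrated with a small sketch (my own assumptions, not the authors' code): if the optimal value function is λ-Lipschitz over the belief simplex, then point estimates of its value at sampled beliefs induce cone-shaped lower and upper bounds. The function name lipschitz_bounds, the choice of the 1-norm as the metric on ∆, and all numerical values below are hypothetical illustrations.

import numpy as np

def lipschitz_bounds(belief, sample_beliefs, sample_values, lam):
    # Any lam-Lipschitz value function V passing through the points (b_i, v_i)
    # satisfies  max_i (v_i - lam*d(b, b_i)) <= V(b) <= min_i (v_i + lam*d(b, b_i)).
    # Here d is taken to be the 1-norm distance on the belief simplex (an assumption).
    belief = np.asarray(belief, dtype=float)
    dists = np.abs(np.asarray(sample_beliefs, dtype=float) - belief).sum(axis=1)
    values = np.asarray(sample_values, dtype=float)
    lower = np.max(values - lam * dists)  # tightest lower "cone"
    upper = np.min(values + lam * dists)  # tightest upper "cone"
    return lower, upper

# Illustrative call on a 3-state belief simplex (all numbers hypothetical):
B = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [1/3, 1/3, 1/3]])
V = np.array([0.0, 2.0, 1.5])
print(lipschitz_bounds([0.5, 0.5, 0.0], B, V, lam=2.0))

In an HSVI-style algorithm such as those mentioned in the abstract, bounds of this kind would be tightened by adding new (belief, value) points along sampled trajectories until the gap between the upper and lower bound at the initial belief falls below ε.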
Document type: Conference papers

Cited literature: 30 references

https://hal.inria.fr/hal-01903685
Contributor: Olivier Buffet
Submitted on: Wednesday, January 9, 2019 - 9:53:23 AM
Last modification on: Wednesday, April 3, 2019 - 1:23:15 AM

File

nips18-ext.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-01903685, version 2

Citation

Mathieu Fehr, Olivier Buffet, Vincent Thomas, Jilles Steeve Dibangoye. ρ-POMDPs have Lipschitz-Continuous ϵ-Optimal Value Functions. NIPS 2018 - Thirty-second Conference on Neural Information Processing Systems, Dec 2018, Montréal, Canada. pp.1-27. ⟨hal-01903685v2⟩

Metrics

Record views: 133
Files downloads: 256