Conference Paper, Year: 2018

ρ-POMDPs have Lipschitz-Continuous ϵ-Optimal Value Functions

Mathieu Fehr (1), Olivier Buffet (2), Vincent Thomas (2), Jilles Dibangoye (3)

Abstract

Many state-of-the-art algorithms for solving Partially Observable Markov Decision Processes (POMDPs) rely on turning the problem into a "fully observable" problem---a belief MDP---and exploiting the piece-wise linearity and convexity (PWLC) of the optimal value function in this new state space (the belief simplex ∆). This approach has been extended to solving ρ-POMDPs, i.e., POMDPs with information-oriented criteria, when the reward ρ is convex in ∆. General ρ-POMDPs can also be turned into "fully observable" problems, but with no means of exploiting the PWLC property. In this paper, we focus on POMDPs and ρ-POMDPs with λ_ρ-Lipschitz reward functions, and demonstrate that, for finite horizons, the optimal value function is Lipschitz-continuous. Then, value function approximators are proposed for both upper- and lower-bounding the optimal value function, and are shown to provide uniformly improvable bounds. This allows deriving two algorithms from HSVI, which are empirically evaluated on various benchmark problems.
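The Lipschitz property states that |V*(b) − V*(b′)| ≤ λ ‖b − b′‖₁ for all beliefs b, b′ ∈ ∆, so each known point value confines the optimal value function within a cone. Below is a minimal sketch of cone-based upper and lower bounds in this spirit; the choice of 1-norm, the constant λ, and all names and values are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def lower_bound(b, anchors, values, lam):
    """Lower bound: max over downward Lipschitz cones anchored at (b_i, v_i)."""
    return max(v - lam * np.abs(b - bi).sum() for bi, v in zip(anchors, values))

def upper_bound(b, anchors, values, lam):
    """Upper bound: min over upward Lipschitz cones anchored at (b_i, v_i)."""
    return min(v + lam * np.abs(b - bi).sum() for bi, v in zip(anchors, values))

# Illustrative anchor points on a 3-state belief simplex (hypothetical values).
anchors = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.5, 0.5])]
values = [2.0, 1.0]
b = np.array([0.4, 0.3, 0.3])
print(lower_bound(b, anchors, values, lam=1.5))  # valid lower bound on V*(b)
print(upper_bound(b, anchors, values, lam=1.5))  # valid upper bound on V*(b)
```

Tightening such bounds by adding new anchor points is what makes them uniformly improvable, the property the HSVI-derived algorithms rely on.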
Main file: nips18-ext.pdf (587.1 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01903685, version 1 (24-10-2018)
hal-01903685, version 2 (09-01-2019)

Identifiers

  • HAL Id: hal-01903685, version 2

Cite

Mathieu Fehr, Olivier Buffet, Vincent Thomas, Jilles Dibangoye. ρ-POMDPs have Lipschitz-Continuous ϵ-Optimal Value Functions. NIPS 2018 - Thirty-second Conference on Neural Information Processing Systems, Dec 2018, Montréal, Canada. pp.1-27. ⟨hal-01903685v2⟩