Monte Carlo Information-Oriented Planning

Conference paper
Abstract

In this article, we discuss how to solve information-gathering problems expressed as ρ-POMDPs, an extension of Partially Observable Markov Decision Processes (POMDPs) whose reward ρ depends on the belief state. Point-based approaches used for solving POMDPs have been extended to solve ρ-POMDPs as belief MDPs when the reward ρ is convex over the belief space B or when it is Lipschitz-continuous. In the present paper, we build on the POMCP algorithm to propose a Monte Carlo Tree Search for ρ-POMDPs, aiming for an efficient on-line planner that can be used with any ρ function. The belief-dependent rewards require adaptations to (i) propagate more than one state at a time, and (ii) prevent biases in value estimates. An asymptotic proof of convergence to ε-optimal values is given when ρ is continuous. Experiments are conducted to analyze the algorithms at hand and show that they outperform myopic approaches.
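To make the notion of a belief-dependent reward ρ concrete, here is a minimal sketch (not the authors' code) for a two-state "listening" problem: the hidden state is static, a listen action returns the true state with some probability, and ρ(b) is the negentropy of the belief. All names and numerical values are illustrative assumptions; for brevity the sketch uses an exact two-state Bayes update, whereas the paper's algorithms maintain particle sets at tree nodes when exact updates are infeasible.

```python
import math
import random

# Illustrative sketch of Monte Carlo evaluation with a belief-dependent
# reward. Hidden state s in {0, 1} is static; a "listen" action observes
# the true state with probability NOISE. Because the reward depends on
# the belief b rather than on a single (state, action) pair, each rollout
# must track the belief itself, not just one sampled state.

NOISE = 0.85  # P(observation == true state) -- illustrative value


def observe(state, rng):
    """Sample a noisy observation of the hidden state."""
    return state if rng.random() < NOISE else 1 - state


def update(belief, obs):
    """Exact Bayes update of P(s = 1) after a listen observation."""
    like1 = NOISE if obs == 1 else 1 - NOISE  # P(obs | s = 1)
    like0 = NOISE if obs == 0 else 1 - NOISE  # P(obs | s = 0)
    return like1 * belief / (like1 * belief + like0 * (1 - belief))


def rho(belief):
    """Belief-dependent reward: negentropy of b.

    Zero for the uniform belief, log 2 for a fully certain one.
    """
    eps = 1e-12  # guards log(0) at the belief-simplex corners
    h = -(belief * math.log(belief + eps)
          + (1 - belief) * math.log(1 - belief + eps))
    return math.log(2) - h


def estimate_value(n_listens, n_rollouts=2000, seed=0):
    """Monte Carlo estimate of E[rho(b)] after n_listens observations.

    Each rollout samples a state from the uniform prior, simulates the
    observations that state would generate, and scores the final belief.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_rollouts):
        s = rng.randint(0, 1)
        b = 0.5
        for _ in range(n_listens):
            b = update(b, observe(s, rng))
        total += rho(b)
    return total / n_rollouts
```

Running `estimate_value` with increasing numbers of listen actions shows the expected negentropy rising from 0 (uniform belief) toward log 2, which is exactly the kind of information-gathering value a myopic planner can underestimate when several steps are needed before the belief sharpens.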
Main file: ecai_2020.pdf (354.33 KB)
Origin: Files produced by the author(s)

Identifiers

  • HAL Id: hal-02943028, version 1 (18-09-2020)

Cite

Vincent Thomas, Gérémy Hutin, Olivier Buffet. Monte Carlo Information-Oriented Planning. 24th ECAI 2020 - European Conference on Artificial Intelligence, Aug 2020, Santiago de Compostela, Spain. ⟨hal-02943028⟩