Conference papers

Monte Carlo Information-Oriented Planning

Abstract: In this article, we discuss how to solve information-gathering problems expressed as ρ-POMDPs, an extension of Partially Observable Markov Decision Processes (POMDPs) whose reward ρ depends on the belief state. Point-based approaches used for solving POMDPs have been extended to solving ρ-POMDPs as belief MDPs when the reward ρ is convex in the belief space B or when it is Lipschitz-continuous. In the present paper, we build on the POMCP algorithm to propose a Monte Carlo Tree Search for ρ-POMDPs, aiming for an efficient on-line planner that can be used for any ρ function. The belief-dependent rewards require adaptations to (i) propagate more than one state at a time and (ii) prevent biases in value estimates. An asymptotic convergence proof to ε-optimal values is given when ρ is continuous. Experiments are conducted to analyze the algorithms at hand and show that they outperform myopic approaches.
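To give a concrete picture of what the abstract describes, the sketch below is a minimal, hypothetical illustration and not the authors' ρ-POMCP implementation: a POMCP-style tree search in which each simulation propagates a whole set of particles rather than a single state, so that a belief-dependent reward ρ (here, the negative entropy of the empirical belief, a common information-gathering criterion) can be evaluated at every step. The toy two-state model, the names ACTIONS, step, rho, update_particles, simulate and plan, and all parameter values are assumptions made purely for illustration.

import math
import random
from collections import Counter, defaultdict

# Toy generative model (hypothetical): a static hidden state in {0, 1};
# "sense" returns a noisy reading of it, "wait" observes nothing.
ACTIONS = ["sense", "wait"]
GAMMA = 0.95

def step(state, action):
    """Sample (next_state, observation) from the generative model."""
    if action == "sense":
        obs = state if random.random() < 0.85 else 1 - state
    else:
        obs = None
    return state, obs

def rho(particles):
    """Belief-dependent reward: negative entropy of the empirical belief."""
    n = len(particles)
    return sum((c / n) * math.log(c / n) for c in Counter(particles).values())

class Node:
    """A history node holding per-action statistics and child nodes."""
    def __init__(self):
        self.visits = 0
        self.action_visits = defaultdict(int)
        self.action_value = defaultdict(float)
        self.children = {}  # (action, observation) -> Node

def select_action(node, c=1.0):
    """UCB1 action selection; untried actions are explored first."""
    def score(a):
        n_a = node.action_visits[a]
        if n_a == 0:
            return float("inf")
        return node.action_value[a] + c * math.sqrt(math.log(node.visits) / n_a)
    return max(ACTIONS, key=score)

def update_particles(particles, action, obs):
    """Resample a particle set consistent with the sampled (action, obs) pair."""
    new = []
    while len(new) < len(particles):
        s, o = step(random.choice(particles), action)
        if o == obs:
            new.append(s)
    return new

def simulate(node, particles, depth):
    """One simulation: the whole particle set is propagated down the tree."""
    if depth == 0:
        return 0.0
    action = select_action(node)
    _, obs = step(random.choice(particles), action)   # sample one observation
    next_particles = update_particles(particles, action, obs)
    reward = rho(next_particles)                      # reward computed on the belief
    child = node.children.setdefault((action, obs), Node())
    ret = reward + GAMMA * simulate(child, next_particles, depth - 1)
    node.visits += 1
    node.action_visits[action] += 1
    node.action_value[action] += (ret - node.action_value[action]) / node.action_visits[action]
    return ret

def plan(particles, n_sims=1000, depth=10):
    """Run MCTS from the current particle-based belief and return an action."""
    root = Node()
    for _ in range(n_sims):
        simulate(root, list(particles), depth)
    return max(ACTIONS, key=lambda a: root.action_value[a])

if __name__ == "__main__":
    belief = [random.choice([0, 1]) for _ in range(64)]   # uniform initial belief
    print("chosen action:", plan(belief))                 # "sense" gathers information

Note that this sketch only illustrates the general idea of propagating a particle set and rewarding on the resulting belief; the adaptations the paper introduces to prevent the biases that small particle sets induce in value estimates are not reproduced here.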

https://hal.inria.fr/hal-02943028
Contributor: Vincent Thomas
Submitted on: Friday, September 18, 2020 - 3:09:49 PM
Last modification on: Wednesday, October 14, 2020 - 4:11:46 AM

File

ecai_2020.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-02943028, version 1

Citation

Vincent Thomas, Gérémy Hutin, Olivier Buffet. Monte Carlo Information-Oriented Planning. ECAI 2020 - 24th European Conference on Artificial Intelligence, Aug 2020, Santiago de Compostela, Spain. ⟨hal-02943028⟩

Metrics

Record views: 19
File downloads: 82