Conference paper — Year: 2013

Optimally Solving Dec-POMDPs as Continuous-State MDPs

Abstract

Optimally solving decentralized partially observable Markov decision processes (Dec-POMDPs) is a hard combinatorial problem. Current algorithms search through the space of full histories for each agent. Because the number of policies in this space grows doubly exponentially with the planning horizon, these methods quickly become intractable. In real-world problems, however, computing policies over the full history space is often unnecessary: the histories the agents actually experience often lie near a structured, low-dimensional manifold embedded in the history space. We show that by transforming a Dec-POMDP into a continuous-state MDP, we are able to find and exploit these low-dimensional representations. Using this novel transformation, we can then apply powerful techniques for solving POMDPs and continuous-state MDPs. By combining a general search algorithm with dimension reduction based on feature selection, we introduce a novel approach that optimally solves problems with significantly longer planning horizons than previous methods.
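To make the transformation concrete, here is a minimal Python sketch of the idea: the continuous MDP state is a probability distribution over hidden states and joint observation histories (often called an occupancy state), and a joint decision rule maps each agent's history to an action. All dynamics and numbers below are illustrative toy assumptions, not the paper's benchmarks.

```python
from itertools import product

# Toy two-agent Dec-POMDP (hypothetical dynamics, for illustration only):
STATES = [0, 1]
ACTIONS = [0, 1]   # per-agent action set
OBS = [0, 1]       # per-agent observation set

def T(s, joint_a, s2):
    """P(s' | s, a1, a2): the joint action (1, 1) flips the state."""
    target = 1 - s if joint_a == (1, 1) else s
    return 1.0 if s2 == target else 0.0

def O(joint_o, s2):
    """P(o1, o2 | s'): each agent observes the new state with 80% accuracy."""
    p1 = 0.8 if joint_o[0] == s2 else 0.2
    p2 = 0.8 if joint_o[1] == s2 else 0.2
    return p1 * p2

def update_occupancy(eta, rules):
    """One planning step on the continuous-state MDP: eta maps
    (joint_history, state) -> probability, and rules[i] maps agent i's
    observation history to an action (a deterministic decision rule).
    Returns the next occupancy state over extended joint histories."""
    nxt = {}
    for ((h1, h2), s), p in eta.items():
        a = (rules[0](h1), rules[1](h2))
        for s2, o1, o2 in product(STATES, OBS, OBS):
            q = p * T(s, a, s2) * O((o1, o2), s2)
            if q > 0.0:
                key = ((h1 + (o1,), h2 + (o2,)), s2)
                nxt[key] = nxt.get(key, 0.0) + q
    return nxt

# Initial occupancy: empty histories, state known to be 0.  Apply the
# decision rule "always choose action 0" for both agents:
eta0 = {(((), ()), 0): 1.0}
always0 = lambda h: 0
eta1 = update_occupancy(eta0, (always0, always0))
```

After one update, `eta1` assigns probability 0.64 to the joint history ((0,), (0,)) with state 0, since the state stays at 0 under joint action (0, 0) and each agent independently observes it correctly with probability 0.8. The dimension-reduction idea in the abstract amounts to noticing that, in many problems, such occupancy states concentrate on a small set of (history, state) pairs.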
Main file: ijcai13b.pdf (4.01 KB)
Origin: Publisher files allowed on an open archive

Dates and versions

hal-00907338, version 1 (21-11-2013)

Identifiers

  • HAL Id: hal-00907338, version 1

Cite

Jilles Steeve Dibangoye, Christopher Amato, Olivier Buffet, François Charpillet. Optimally Solving Dec-POMDPs as Continuous-State MDPs. IJCAI - 23rd International Joint Conference on Artificial Intelligence, Aug 2013, Beijing, China. ⟨hal-00907338⟩
208 views
96 downloads
