C. Amato, G. Chowdhary, A. Geramifard, N. K. Ure, and M. J. Kochenderfer, Decentralized control of partially observable Markov decision processes, 52nd IEEE Conference on Decision and Control, 2013.
DOI : 10.1109/CDC.2013.6760239

C. Amato, J. S. Dibangoye, and S. Zilberstein, Incremental policy generation for finite-horizon DEC-POMDPs, ICAPS, 2009.

R. Aras and A. Dutech, An investigation into mathematical programming for finite horizon decentralized POMDPs, JAIR, vol.37, pp.329-396, 2010.
URL : https://hal.archives-ouvertes.fr/inria-00439627

R. Becker, S. Zilberstein, V. R. Lesser, and C. V. Goldman, Solving transition independent decentralized Markov decision processes, JAIR, vol.22, pp.423-455, 2004.

D. S. Bernstein, C. Amato, E. A. Hansen, and S. Zilberstein, Policy iteration for decentralized control of Markov decision processes, JAIR, vol.34, pp.89-132, 2009.

D. S. Bernstein, R. Givan, N. Immerman, and S. Zilberstein, The complexity of decentralized control of Markov decision processes, Mathematics of Operations Research, vol.27, issue.4, 2002.
DOI : 10.1287/moor.27.4.819.297

A. Boularias and B. Chaib-draa, Exact dynamic programming for decentralized POMDPs with lossless policy compression, ICAPS, pp.20-27, 2008.

C. Boutilier, R. Dearden, and M. Goldszmidt, Stochastic dynamic programming with factored representations, Artificial Intelligence, vol.121, issue.1-2, pp.49-107, 2000.
DOI : 10.1016/S0004-3702(00)00033-3

S. de Givry, F. Heras, M. Zytnicki, and J. Larrosa, Existential arc consistency: Getting closer to full arc consistency in weighted CSPs, IJCAI, pp.84-89, 2005.

R. Dechter, Bucket elimination: a unifying framework for processing hard and soft constraints, ACM Computing Surveys, vol.28, issue.4es, pp.51-55, 1997.
DOI : 10.1145/242224.242302

URL : http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.43.9602

R. Dechter, Bucket elimination: A unifying framework for reasoning, Artificial Intelligence, vol.113, issue.1-2, pp.41-85, 1999.
DOI : 10.1016/S0004-3702(99)00059-4

J. S. Dibangoye, C. Amato, O. Buffet, and F. Charpillet, Optimally solving Dec-POMDPs as continuous-state MDPs, IJCAI, 2013.
URL : https://hal.archives-ouvertes.fr/hal-00907338

J. S. Dibangoye, C. Amato, and A. Doniec, Scaling up decentralized MDPs through heuristic search, UAI, pp.217-226, 2012.
URL : https://hal.archives-ouvertes.fr/hal-00765221

J. S. Dibangoye, C. Amato, A. Doniec, and F. Charpillet, Producing efficient error-bounded solutions for transition independent decentralized MDPs, AAMAS, 2013.
URL : https://hal.archives-ouvertes.fr/hal-00918066

C. Guestrin, D. Koller, and R. Parr, Multiagent planning with factored MDPs, NIPS, pp.1523-1530, 2001.

C. Guestrin, D. Koller, R. Parr, and S. Venkataraman, Efficient solution algorithms for factored MDPs, J. Artif. Intell. Res. (JAIR), vol.19, pp.399-468, 2003.

D. Koller and R. Parr, Computing factored value functions for policies in structured MDPs, IJCAI, pp.1332-1339, 1999.

A. Kumar and S. Zilberstein, Constraint-based dynamic programming for decentralized POMDPs with structured interactions, AAMAS, 2009.

A. Kumar and S. Zilberstein, Point-based backup for decentralized POMDPs: complexity and new algorithms, AAMAS, pp.1315-1322, 2010.

A. Kumar, S. Zilberstein, and M. Toussaint, Scalable multiagent planning using probabilistic inference, IJCAI, pp.2140-2146, 2011.

B. Kveton, M. Hauskrecht, and C. Guestrin, Solving factored MDPs with hybrid state and action variables, J. Artif. Intell. Res. (JAIR), vol.27, pp.153-201, 2006.

J. Marecki, T. Gupta, P. Varakantham, M. Tambe, and M. Yokoo, Not all agents are equal: scaling up distributed POMDPs for agent networks, AAMAS (1), pp.485-492, 2008.

R. Nair, P. Varakantham, M. Tambe, and M. Yokoo, Networked distributed POMDPs: A synthesis of distributed constraint optimization and POMDPs, AAAI, pp.133-139, 2005.

F. A. Oliehoek, Decentralized POMDPs, Reinforcement Learning: State of the Art, pp.471-503, 2012.
DOI : 10.1007/978-3-642-27645-3_15

F. A. Oliehoek, Sufficient plan-time statistics for decentralized POMDPs, IJCAI, 2013.

F. A. Oliehoek, M. T. Spaan, C. Amato, and S. Whiteson, Incremental clustering and expansion for faster optimal planning in Dec-POMDPs, JAIR, vol.46, pp.449-509, 2013.

F. A. Oliehoek, S. Whiteson, and M. T. Spaan, Lossless clustering of histories in decentralized POMDPs, AAMAS, pp.577-584, 2009.

F. A. Oliehoek, S. J. Witwicki, and L. P. Kaelbling, Influence-based abstraction for multiagent systems, AAAI, 2012.

R. Patrascu, P. Poupart, D. Schuurmans, C. Boutilier, and C. Guestrin, Greedy linear value-approximation for factored Markov decision processes, AAAI/IAAI, pp.285-291, 2002.

M. Petrik and S. Zilberstein, A bilinear programming approach for multiagent planning, JAIR, vol.35, pp.235-274, 2009.

M. L. Puterman, Markov Decision Processes: Discrete Stochastic Dynamic Programming, Wiley, 1994.

R. D. Smallwood and E. J. Sondik, The optimal control of partially observable Markov processes over a finite horizon, Operations Research, vol.21, issue.5, pp.1071-1088, 1973.
DOI : 10.1287/opre.21.5.1071

T. Smith and R. Simmons, Heuristic search value iteration for POMDPs, UAI, pp.520-527, 2004.

D. Szer, F. Charpillet, and S. Zilberstein, MAA*: A heuristic search algorithm for solving decentralized POMDPs, UAI, pp.568-576, 2005.
URL : https://hal.archives-ouvertes.fr/inria-00000204

P. Varakantham, J. Marecki, M. Tambe, and M. Yokoo, Letting loose a SPIDER on a network of POMDPs, AAMAS, 2007.
DOI : 10.1145/1329125.1329388

S. J. Witwicki and E. H. Durfee, Influence-based policy abstraction for weakly-coupled Dec-POMDPs, ICAPS, pp.185-192, 2010.