C. Amato, D. S. Bernstein, and S. Zilberstein, Optimizing fixed-size stochastic controllers for POMDPs and decentralized POMDPs, Autonomous Agents and Multi-Agent Systems, vol.21, issue.3, pp.293-320, 2010.
DOI: 10.1007/s10458-009-9103-z

C. Amato, J. S. Dibangoye, and S. Zilberstein, Incremental policy generation for finite-horizon Dec-POMDPs, In: ICAPS, 2009.

C. Amato and S. Zilberstein, Achieving goals in decentralized POMDPs, In: AAMAS, 2009.

A. Bagnell, S. Kakade, A. Ng, and J. Schneider, Policy search by dynamic programming, In: NIPS, vol.16, 2003.

D. S. Bernstein, C. Amato, E. A. Hansen, and S. Zilberstein, Policy iteration for decentralized control of Markov decision processes, Journal of Artificial Intelligence Research, vol.34, pp.89-132, 2009.

D. S. Bernstein, R. Givan, N. Immerman, and S. Zilberstein, The complexity of decentralized control of Markov decision processes, Mathematics of Operations Research, vol.27, issue.4, pp.819-840, 2002.
DOI: 10.1287/moor.27.4.819.297

J. S. Dibangoye, C. Amato, O. Buffet, and F. Charpillet, Optimally solving Dec-POMDPs as continuous-state MDPs, In: IJCAI, 2013.
URL: https://hal.archives-ouvertes.fr/hal-00907338

J. S. Dibangoye, C. Amato, O. Buffet, and F. Charpillet, Optimally solving Dec-POMDPs as continuous-state MDPs: Theory and algorithms, 2014.
URL: https://hal.archives-ouvertes.fr/hal-00975802

J. S. Dibangoye, A. I. Mouaddib, and B. Chaib-draa, Point-based incremental pruning heuristic for solving finite-horizon Dec-POMDPs, In: AAMAS, 2009.

J. S. Dibangoye, A. I. Mouaddib, and B. Chaib-draa, Toward error-bounded algorithms for infinite-horizon Dec-POMDPs, In: AAMAS, 2011.
URL: https://hal.archives-ouvertes.fr/hal-00969579

S. de Givry, F. Heras, M. Zytnicki, and J. Larrosa, Existential arc consistency: Getting closer to full arc consistency in weighted CSPs, In: IJCAI, 2005.

E. A. Hansen, D. S. Bernstein, and S. Zilberstein, Dynamic programming for partially observable stochastic games, In: AAAI, 2004.

A. Kumar and S. Zilberstein, Point-based backup for decentralized POMDPs: Complexity and new algorithms, In: AAMAS, 2010.

L. C. MacDermed and C. Isbell, Point based value iteration with optimal belief compression for Dec-POMDPs, In: NIPS, 2013.

F. A. Oliehoek, M. T. Spaan, J. S. Dibangoye, and C. Amato, Heuristic search for identical payoff Bayesian games, In: AAMAS, 2010.

J. Pajarinen and J. Peltonen, Periodic finite state controllers for efficient POMDP and DEC-POMDP planning, In: NIPS, 2011.

J. Pineau, G. Gordon, and S. Thrun, Point-based value iteration: An anytime algorithm for POMDPs, In: IJCAI, 2003.

M. L. Puterman, Markov Decision Processes: Discrete Stochastic Dynamic Programming, John Wiley & Sons, 1994.

B. Scherrer, Improved and generalized upper bounds on the complexity of policy iteration, In: NIPS, 2013. Journal version: Mathematics of Operations Research, vol.41, issue.3.
DOI: 10.1287/moor.2015.0753
URL: https://hal.archives-ouvertes.fr/hal-00829532

S. Seuken and S. Zilberstein, Formal models and algorithms for decentralized decision making under uncertainty, Autonomous Agents and Multi-Agent Systems, vol.17, issue.2, pp.190-250, 2008.
DOI: 10.1007/s10458-007-9026-5

T. Smith and R. G. Simmons, Point-based POMDP algorithms: Improved analysis and implementation, In: UAI, 2005.