J. Audibert, R. Munos, and C. Szepesvari, Use of variance estimation in the multi-armed bandit problem, NIPS 2006 Workshop on On-line Trading of Exploration and Exploitation, 2006.
URL : https://hal.archives-ouvertes.fr/inria-00203496

P. Auer, Using confidence bounds for exploitation-exploration tradeoffs, The Journal of Machine Learning Research, vol.3, pp.397-422, 2003.

A. Auger and O. Teytaud, Continuous Lunches Are Free Plus the Design of Optimal Optimization Algorithms, Algorithmica, vol.57, issue.1, pp.121-146, 2010.
DOI : 10.1007/s00453-008-9244-5

URL : https://hal.archives-ouvertes.fr/inria-00369788

B. Bruegmann, Monte-Carlo Go, 1993.

T. Cazenave, Nested Monte-Carlo search, IJCAI, pp.456-461, 2009.

URL : https://hal.archives-ouvertes.fr/hal-01436286

T. Cazenave and A. Saffidine, Utilisation de la recherche arborescente Monte-Carlo au Hex [Using Monte-Carlo tree search for Hex], Revue d'intelligence artificielle, vol.23, issue.2-3, pp.183-202, 2009.
DOI : 10.3166/ria.23.183-202

G. Chaslot, C. Fiter, J. Hoock, A. Rimmel, and O. Teytaud, Adding Expert Knowledge and Exploration in Monte-Carlo Tree Search, Advances in Computer Games, 2009.
DOI : 10.1007/978-3-642-12993-3_1

URL : https://hal.archives-ouvertes.fr/inria-00386477

G. Chaslot, J. Saito, B. Bouzy, J. W. Uiterwijk, and H. J. van den Herik, Monte-Carlo Strategies for Computer Go, Proceedings of the 18th BeNeLux Conference on Artificial Intelligence, pp.83-91, 2006.

G. Chaslot, M. Winands, J. Uiterwijk, H. van den Herik, and B. Bouzy, Progressive strategies for Monte-Carlo tree search, Proceedings of the 10th Joint Conference on Information Sciences, pp.655-661, 2007.

R. Coulom, Efficient selectivity and backup operators in Monte-Carlo tree search, Proceedings of the 5th International Conference on Computers and Games, pp.72-83, 2006.
URL : https://hal.archives-ouvertes.fr/inria-00116992

R. Coulom, Computing "Elo ratings" of move patterns in the game of Go, ICGA Journal, vol.30, issue.4, pp.198-208, 2007.
URL : https://hal.archives-ouvertes.fr/inria-00149859

M. Crasmaru, On the Complexity of Tsume-Go, Computers and Games, pp.222-231, 1999.
DOI : 10.1007/3-540-48957-6_15

M. Crasmaru and J. Tromp, Ladders are PSPACE-complete, Computers and Games, pp.241-249, 2000.

F. de Mesmay, A. Rimmel, Y. Voronenko, and M. Püschel, Bandit-based optimization on graphs with application to library performance tuning, Proceedings of the 26th Annual International Conference on Machine Learning, ICML '09, 2009.
DOI : 10.1145/1553374.1553468

URL : https://hal.archives-ouvertes.fr/inria-00379523

Z. Galil and G. F. Italiano, Data structures and algorithms for disjoint set union problems, ACM Computing Surveys, vol.23, issue.3, pp.319-344, 1991.
DOI : 10.1145/116873.116878

S. Gelly and D. Silver, Combining online and offline knowledge in UCT, Proceedings of the 24th international conference on Machine learning, ICML '07, pp.273-280, 2007.
DOI : 10.1145/1273496.1273531

URL : https://hal.archives-ouvertes.fr/inria-00164003

L. R. Harris, The heuristic search and the game of chess - a study of quiescence, sacrifices, and plan-oriented play, IJCAI, pp.334-339, 1975.

L. Kocsis and C. Szepesvari, Bandit Based Monte-Carlo Planning, European Conference on Machine Learning (ECML), pp.282-293, 2006.
DOI : 10.1007/11871842_29

T. Lai and H. Robbins, Asymptotically efficient adaptive allocation rules, Advances in Applied Mathematics, vol.6, issue.1, pp.4-22, 1985.
DOI : 10.1016/0196-8858(85)90002-8

C. Lee, M. Wang, G. Chaslot, J. Hoock, A. Rimmel et al., The Computational Intelligence of MoGo Revealed in Taiwan's Computer Go Tournaments, IEEE Transactions on Computational Intelligence and AI in Games, 2009.

R. J. Lorentz, Amazons Discover Monte-Carlo, CG '08: Proceedings of the 6th international conference on Computers and Games, pp.13-24, 2008.
DOI : 10.1007/978-3-540-87608-3_2

J. Nash, Some games and machines for playing them, 1952.

S. Reisch, Hex is PSPACE-complete, Acta Informatica, vol.15, pp.167-191, 1981.

J. M. Robson, The complexity of Go, IFIP Congress, pp.413-417, 1983.

P. Rolet, M. Sebag, and O. Teytaud, Optimal active learning through billiards and upper confidence trees in continuous domains, Proceedings of the ECML conference, pp.302-317, 2009.

M. P. Schadd, M. H. Winands, H. J. van den Herik, G. Chaslot, and J. W. Uiterwijk, Single-Player Monte-Carlo Tree Search, Computers and Games, pp.1-12, 2008.
DOI : 10.1007/978-3-540-87608-3_1

R. W. Schmittberger, New Rules for Classic Games, 1992.

S. Sharma, Z. Kobti, and S. Goodwin, Knowledge Generation for Improving Simulations in UCT for General Game Playing, AI 2008: Advances in Artificial Intelligence, pp.49-55, 2008.

F. Teytaud and O. Teytaud, Creating an Upper-Confidence-Tree Program for Havannah, ACG 12, pp.65-74, 2009.
DOI : 10.1007/978-3-642-12993-3_7

URL : https://hal.archives-ouvertes.fr/inria-00380539

Y. Wang, J. Audibert, and R. Munos, Algorithms for infinitely many-armed bandits, Advances in Neural Information Processing Systems, 2008.

Y. Wang and S. Gelly, Modifications of UCT and sequence-like simulations for Monte-Carlo Go, 2007 IEEE Symposium on Computational Intelligence and Games, pp.175-182, 2007.
DOI : 10.1109/CIG.2007.368095

S. Zilberstein, Resource-bounded reasoning in intelligent systems, ACM Computing Surveys, vol.28, issue.4es, 1996.
DOI : 10.1145/242224.242243