P. Abbeel and A. Ng, Apprenticeship learning via inverse reinforcement learning, Proceedings of the Twenty-first International Conference on Machine Learning (ICML '04), 2004.
DOI : 10.1145/1015330.1015430

R. Akrour, M. Schoenauer, and M. Sebag, Preference-Based Policy Learning, Proc. ECML/PKDD, pp.12-27, 2011.
DOI : 10.1007/978-3-642-23780-5_11

URL : https://hal.archives-ouvertes.fr/inria-00625001

C. Bergeron, J. Zaretzki, C. M. Breneman, and K. P. Bennett, Multiple instance ranking, Proceedings of the 25th International Conference on Machine Learning (ICML '08), pp.48-55, 2008.
DOI : 10.1145/1390156.1390163

E. Brochu, N. de Freitas, and A. Ghosh, Active preference learning with discrete choice data, Advances in Neural Information Processing Systems 20, pp.409-416, 2008.

S. Calinon, F. Guenter, and A. Billard, On Learning, Representing, and Generalizing a Task in a Humanoid Robot, IEEE Transactions on Systems, Man and Cybernetics, Part B (Cybernetics), vol.37, issue.2, pp.286-298, 2007.
DOI : 10.1109/TSMCB.2006.886952

W. Cheng, J. Fürnkranz, E. Hüllermeier, and S. H. Park, Preference-Based Policy Iteration: Leveraging Preference Learning for Reinforcement Learning, Proc. ECML/PKDD, pp.312-327, 2011.
DOI : 10.1007/978-3-642-23780-5_30

C. Cortes and V. Vapnik, Support-vector networks, Machine Learning, vol.20, issue.3, pp.273-297, 1995.
DOI : 10.1007/BF00994018

S. Dasgupta, Coarse sample complexity bounds for active learning, Advances in Neural Information Processing Systems, 2005.

R. Duda and P. Hart, Pattern Classification and Scene Analysis, Wiley, 1973.

H. Hachiya and M. Sugiyama, Feature Selection for Reinforcement Learning: Evaluating Implicit State-Reward Dependency via Conditional Mutual Information, Proc. ECML/PKDD, pp.474-489, 2010.
DOI : 10.1007/978-3-642-15880-3_36

N. Hansen and A. Ostermeier, Completely Derandomized Self-Adaptation in Evolution Strategies, Evolutionary Computation, vol.9, issue.2, pp.159-195, 2001.
DOI : 10.1162/106365601750190398

V. Heidrich-Meisner and C. Igel, Hoeffding and Bernstein races for selecting policies in evolutionary direct policy search, Proceedings of the 26th Annual International Conference on Machine Learning (ICML '09), p.51, 2009.
DOI : 10.1145/1553374.1553426

R. Herbrich, T. Graepel, and C. Campbell, Bayes point machines, Journal of Machine Learning Research, vol.1, pp.245-279, 2001.

T. Joachims, A support vector method for multivariate performance measures, Proceedings of the 22nd International Conference on Machine Learning (ICML '05), pp.377-384, 2005.
DOI : 10.1145/1102351.1102399

T. Joachims, Training linear SVMs in linear time, Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '06), pp.217-226, 2006.
DOI : 10.1145/1150402.1150429

D. Jones, M. Schonlau, and W. Welch, Efficient global optimization of expensive black-box functions, Journal of Global Optimization, vol.13, issue.4, pp.455-492, 1998.
DOI : 10.1023/A:1008306431147

J. Z. Kolter, P. Abbeel, and A. Y. Ng, Hierarchical apprenticeship learning with application to quadruped locomotion, Advances in Neural Information Processing Systems 20, 2007.

G. Konidaris, S. Kuindersma, A. Barto, and R. Grupen, Constructing skill trees for reinforcement learning agents from demonstration trajectories, Advances in Neural Information Processing Systems 23, pp.1162-1170, 2010.

M. Lagoudakis and R. Parr, Least-squares policy iteration, Journal of Machine Learning Research (JMLR), vol.4, pp.1107-1149, 2003.

C. Liu, Q. Chen, and D. Wang, Locomotion control of quadruped robots based on CPG-inspired workspace trajectory generation, Proc. ICRA, pp.1250-1255, 2011.

A. Ng and S. Russell, Algorithms for inverse reinforcement learning, Proceedings of the Seventeenth International Conference on Machine Learning (ICML '00), pp.663-670, 2000.

J. K. O'Regan and A. Noë, A sensorimotor account of vision and visual consciousness, Behavioral and Brain Sciences, vol.24, issue.5, pp.939-973, 2001.
DOI : 10.1017/S0140525X01000115

J. Peters and S. Schaal, Reinforcement learning of motor skills with policy gradients, Neural Networks, vol.21, issue.4, pp.682-697, 2008.
DOI : 10.1016/j.neunet.2008.02.003

R. Sutton and A. Barto, Reinforcement Learning: An Introduction, MIT Press, 1998.

C. Szepesvári, Algorithms for Reinforcement Learning, Synthesis Lectures on Artificial Intelligence and Machine Learning, vol.4, issue.1, 2010.
DOI : 10.2200/S00268ED1V01Y201005AIM009

I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun, Large margin methods for structured and interdependent output variables, Journal of Machine Learning Research, vol.6, pp.1453-1484, 2005.

P. Viappiani, Monte Carlo Methods for Preference Learning, Proc. Learning and Intelligent OptimizatioN, LION 6, 2012.
DOI : 10.1007/978-3-642-34413-8_52

P. Viappiani and C. Boutilier, Optimal Bayesian recommendation sets and myopically optimal choice query sets, Advances in Neural Information Processing Systems 23, pp.2352-2360, 2010.

S. Whiteson, M. E. Taylor, and P. Stone, Critical factors in the empirical performance of temporal difference and evolutionary methods for reinforcement learning, Autonomous Agents and Multi-Agent Systems, pp.1-27, 2010.
DOI : 10.1007/s10458-009-9100-2

Y. Zhao, M. R. Kosorok, and D. Zeng, Reinforcement learning design for cancer clinical trials, Statistics in Medicine, 2009.
DOI : 10.1002/sim.3720