C. G. Atkeson et al., No falls, no resets: Reliable humanoid behavior in the DARPA robotics challenge, 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids), 2015.
DOI : 10.1109/HUMANOIDS.2015.7363436

Y. LeCun, Y. Bengio, and G. Hinton, Deep learning, Nature, vol.521, issue.7553, pp.436-444, 2015.
DOI : 10.1038/nature14539

J. Deng, W. Dong, R. Socher, L. Li, K. Li et al., ImageNet: A large-scale hierarchical image database, 2009 IEEE Conference on Computer Vision and Pattern Recognition, 2009.
DOI : 10.1109/CVPR.2009.5206848

V. Mnih et al., Human-level control through deep reinforcement learning, Nature, vol.518, issue.7540, pp.529-533, 2015.
DOI : 10.1038/nature14236

M. P. Deisenroth, G. Neumann, and J. Peters, A Survey on Policy Search for Robotics, Foundations and Trends in Robotics, vol.2, issue.1-2, pp.1-142, 2013.
DOI : 10.1561/2300000021

A. S. Polydoros and L. Nalpantidis, Survey of Model-Based Reinforcement Learning: Applications on Robotics, Journal of Intelligent & Robotic Systems, vol.86, issue.2, pp.153-173, 2017.
DOI : 10.1007/s10846-017-0468-y

M. P. Deisenroth, D. Fox, and C. E. Rasmussen, Gaussian Processes for Data-Efficient Learning in Robotics and Control, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.37, issue.2, pp.408-423, 2015.
DOI : 10.1109/TPAMI.2013.218

K. Chatzilygeroudis, R. Rama, R. Kaushik, D. Goepp, V. Vassiliades et al., Black-box data-efficient policy search for robotics, 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2017.
DOI : 10.1109/IROS.2017.8202137

URL : https://hal.archives-ouvertes.fr/hal-01576683

E. Keogh and A. Mueen, Curse of Dimensionality, Encyclopedia of Machine Learning, pp.257-258, 2011.
DOI : 10.1007/978-0-387-30164-8_192

M. Cutler and J. P. How, Efficient reinforcement learning for robots using informative simulated priors, 2015 IEEE International Conference on Robotics and Automation (ICRA), 2015.
DOI : 10.1109/ICRA.2015.7139550

URL : http://dspace.mit.edu/bitstream/1721.1/109303/1/How_Efficient%20reinforcement.pdf

G. Lee, S. S. Srinivasa, and M. T. Mason, GP-ILQG: Data-driven Robust Optimal Control for Uncertain Nonlinear Dynamical Systems, 2017.

M. Saveriano, Y. Yin, P. Falco, and D. Lee, Data-efficient control policy search using residual dynamics learning, 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2017.
DOI : 10.1109/IROS.2017.8206343

B. Bischoff, D. Nguyen-Tuong, H. van Hoof, A. McHutchon, C. E. Rasmussen et al., Policy search for learning robot control using sparse data, 2014 IEEE International Conference on Robotics and Automation (ICRA), 2014.
DOI : 10.1109/ICRA.2014.6907422

A. Cully, J. Clune, D. Tarapore, and J. Mouret, Robots that can adapt like animals, Nature, vol.521, issue.7553, pp.503-507, 2015.
DOI : 10.1038/nature14422

URL : https://hal.archives-ouvertes.fr/hal-01158243

A. Marco, F. Berkenkamp, P. Hennig, A. P. Schoellig, A. Krause et al., Virtual vs. real: Trading off simulations and physical experiments in reinforcement learning with Bayesian optimization, 2017 IEEE International Conference on Robotics and Automation (ICRA), 2017.
DOI : 10.1109/ICRA.2017.7989186

D. Nguyen-Tuong and J. Peters, Using model knowledge for learning inverse dynamics, 2010 IEEE International Conference on Robotics and Automation, 2010.
DOI : 10.1109/ROBOT.2010.5509858

R. Camoriano, S. Traversaro, L. Rosasco, G. Metta, and F. Nori, Incremental semiparametric inverse dynamics learning, 2016 IEEE International Conference on Robotics and Automation (ICRA), 2016.
DOI : 10.1109/ICRA.2016.7487177

URL : http://arxiv.org/pdf/1601.04549

T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez et al., Continuous control with deep reinforcement learning, arXiv:1509.02971, 2015.

J. Schulman, S. Levine, P. Moritz, M. I. Jordan, and P. Abbeel, Trust region policy optimization, Proc. of ICML, 2015.

J. Kober and J. Peters, Policy search for motor primitives in robotics, Machine Learning, vol.84, issue.1-2, pp.171-203, 2011.
DOI : 10.1007/s10994-010-5223-6

E. Theodorou, J. Buchli, and S. Schaal, A generalized path integral control approach to reinforcement learning, JMLR, vol.11, pp.3137-3181, 2010.

D. Wierstra, T. Schaul, T. Glasmachers, Y. Sun, J. Peters et al., Natural evolution strategies, JMLR, vol.15, pp.949-980, 2014.

N. Hansen and A. Ostermeier, Completely derandomized self-adaptation in evolution strategies, Evolutionary Computation, vol.9, issue.2, pp.159-195, 2001.

R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction, MIT Press, 1998.

J. Kober and J. Peters, Imitation and Reinforcement Learning, IEEE Robotics & Automation Magazine, vol.17, issue.2, pp.55-62, 2010.
DOI : 10.1109/MRA.2010.936952

F. Stulp and O. Sigaud, Robot skill learning: From reinforcement learning to evolution strategies, Paladyn, Journal of Behavioral Robotics, vol.4, issue.1, pp.49-61, 2013.
DOI : 10.2478/pjbr-2013-0003

A. Cully and J. Mouret, Behavioral repertoire learning in robotics, Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation (GECCO '13), 2013.
DOI : 10.1145/2463372.2463399

URL : https://hal.archives-ouvertes.fr/hal-00841958

A. Majumdar and R. Tedrake, Funnel libraries for real-time robust feedback motion planning, The International Journal of Robotics Research, vol.36, issue.8, pp.947-982, 2017.
DOI : 10.1177/0278364917712421

URL : http://dspace.mit.edu/bitstream/1721.1/106033/1/965380239-MIT.pdf

R. Antonova, A. Rai, and C. G. Atkeson, Sample efficient optimization for learning controllers for bipedal locomotion, 2016 IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids), 2016.
DOI : 10.1109/HUMANOIDS.2016.7803249

URL : http://arxiv.org/pdf/1610.04795

J. Mouret and J. Clune, Illuminating search spaces by mapping elites, arXiv:1504.04909, 2015.

V. Vassiliades, K. Chatzilygeroudis, and J. Mouret, Using Centroidal Voronoi Tessellations to Scale Up the Multi-dimensional Archive of Phenotypic Elites Algorithm, IEEE Transactions on Evolutionary Computation, 2017.
DOI : 10.1109/TEVC.2017.2735550

URL : https://hal.archives-ouvertes.fr/hal-01630627

B. Shahriari, K. Swersky, Z. Wang, R. P. Adams, and N. de Freitas, Taking the Human Out of the Loop: A Review of Bayesian Optimization, Proceedings of the IEEE, vol.104, issue.1, pp.148-175, 2016.
DOI : 10.1109/JPROC.2015.2494218

J. Ko, D. J. Klein, D. Fox, and D. Haehnel, Gaussian Processes and Reinforcement Learning for Identification and Control of an Autonomous Blimp, Proceedings 2007 IEEE International Conference on Robotics and Automation, 2007.
DOI : 10.1109/ROBOT.2007.363075

E. Todorov and W. Li, A generalized iterative LQG method for locally-optimal feedback control of constrained nonlinear stochastic systems, Proceedings of the 2005 American Control Conference, 2005.
DOI : 10.1109/ACC.2005.1469949

J. Hollerbach, W. Khalil, and M. Gautier, Model identification, Springer Handbook of Robotics, pp.113-138, 2016.

M. Gautier and W. Khalil, Exciting trajectories for the identification of base inertial parameters of robots, The International Journal of Robotics Research, vol.11, issue.4, pp.362-375, 1992.

F. Aghili, J. M. Hollerbach, and M. Buehler, A Modular and High-Precision Motion Control System With an Integrated Motor, IEEE/ASME Transactions on Mechatronics, vol.12, issue.3, pp.317-329, 2007.
DOI : 10.1109/TMECH.2007.897273

C. Xie, S. Patil, T. Moldovan, S. Levine, and P. Abbeel, Model-based reinforcement learning with parametrized physical models and optimism-driven exploration, Proc. of ICRA, 2016.
DOI : 10.1109/ICRA.2016.7487172

URL : http://arxiv.org/pdf/1509.06824

J. Mouret and K. Chatzilygeroudis, 20 years of reality gap: a few thoughts about simulators in evolutionary robotics, Proceedings of the Genetic and Evolutionary Computation Conference Companion, GECCO '17, 2017.
DOI : 10.1145/3067695.3082052

URL : https://hal.archives-ouvertes.fr/hal-01518764

K. Chatzilygeroudis, V. Vassiliades, and J. Mouret, Reset-free Trial-and-Error Learning for Robot Damage Recovery, 2016.
URL : https://hal.archives-ouvertes.fr/hal-01654641

Y. Engel, S. Mannor, and R. Meir, Reinforcement learning with Gaussian processes, Proceedings of the 22nd International Conference on Machine Learning (ICML '05), 2005.
DOI : 10.1145/1102351.1102377

URL : http://www-ee.technion.ac.il/~rmeir/Publications/EngelMannorMeirICML05.pdf

D. Nguyen-Tuong and J. Peters, Model learning for robot control: a survey, Cognitive Processing, vol.12, issue.4, pp.319-340, 2011.
DOI : 10.1007/s10339-011-0404-1

C. E. Rasmussen and C. K. Williams, Gaussian Processes for Machine Learning, MIT Press, 2006.

M. Blum and M. A. Riedmiller, Optimization of Gaussian process hyperparameters using Rprop, Proc. of ESANN, 2013.

A. Cully, K. Chatzilygeroudis, F. Allocati, and J. Mouret, Limbo: A fast and flexible library for Bayesian optimization, arXiv:1611.07343, 2016.

T. H. Rowan, Functional stability analysis of numerical algorithms, Ph.D. thesis, University of Texas at Austin, 1990.

S. G. Johnson, The NLopt nonlinear-optimization package.
URL : http://ab-initio.mit.edu/nlopt

A. Kupcsik, M. P. Deisenroth, J. Peters, A. P. Loh, P. Vadakkepat et al., Model-based contextual policy search for data-efficient generalization of robot skills, Artificial Intelligence, vol.247, pp.415-439, 2017.
DOI : 10.1016/j.artint.2014.11.005

M. W. Spong and D. J. Block, The Pendubot: a mechatronic system for control research and education, Proceedings of the 34th IEEE Conference on Decision and Control, 1995.
DOI : 10.1109/CDC.1995.478951

J. Lee et al., DART: Dynamic Animation and Robotics Toolkit, The Journal of Open Source Software, vol.3, issue.22, 2018.
DOI : 10.21105/joss.00500

R. Isermann, Fault-diagnosis systems: an introduction from fault detection to fault tolerance, 2006.
DOI : 10.1007/3-540-30368-5