M. Cakmak and M. Lopes, Algorithmic and human teaching of sequential decision tasks, AAAI Conference on Artificial Intelligence (AAAI'12), 2012.
URL : https://hal.archives-ouvertes.fr/hal-00755253

J. L. Elman, Learning and development in neural networks: the importance of starting small, Cognition, vol.48, issue.1, pp.71-80, 1993.
DOI : 10.1016/0010-0277(93)90058-4

Y. Bengio, J. Louradour, R. Collobert, and J. Weston, Curriculum learning, Proceedings of the 26th Annual International Conference on Machine Learning, ICML '09, 2009.
DOI : 10.1145/1553374.1553380

L. Berthouze and M. Lungarella, Motor Skill Acquisition Under Environmental Perturbations: On the Necessity of Alternate Freezing and Freeing of Degrees of Freedom, Adaptive Behavior, vol.12, issue.1, pp.47-64, 2004.
DOI : 10.1177/105971230401200104

A. Baranes and P. Oudeyer, The interaction of maturational constraints and intrinsic motivations in active motor development, 2011 IEEE International Conference on Development and Learning (ICDL), 2011.
DOI : 10.1109/DEVLRN.2011.6037315

URL : https://hal.archives-ouvertes.fr/hal-00646585

M. Lee, Q. Meng, and F. Chao, Staged Competence Learning in Developmental Robotics, Adaptive Behavior, vol.15, issue.3, pp.241-255, 2007.
DOI : 10.1177/1059712307082085

M. Luciw, V. Graziano, M. Ring, and J. Schmidhuber, Artificial curiosity with planning for autonomous perceptual and cognitive development, 2011 IEEE International Conference on Development and Learning (ICDL), 2011.
DOI : 10.1109/DEVLRN.2011.6037356

D. A. Cohn, Z. Ghahramani, and M. I. Jordan, Active learning with statistical models, Journal of Artificial Intelligence Research, vol.4, pp.129-145, 1996.

R. Martinez-Cantin, M. Lopes, and L. Montesano, Body schema acquisition through active learning, 2010 IEEE International Conference on Robotics and Automation (ICRA), 2010.
DOI : 10.1109/ROBOT.2010.5509406

L. Montesano and M. Lopes, Active learning of visual descriptors for grasping using non-parametric smoothed beta distributions, Robotics and Autonomous Systems, vol.60, issue.3, pp.452-462, 2012.
DOI : 10.1016/j.robot.2011.07.013

URL : https://hal.archives-ouvertes.fr/hal-00637575

A. Baranes and P. Oudeyer, Active learning of inverse models with intrinsically motivated goal exploration in robots, Robotics and Autonomous Systems, vol.61, issue.1, 2012.
DOI : 10.1016/j.robot.2012.05.008

URL : https://hal.archives-ouvertes.fr/hal-00788440

J. Schmidhuber, On learning how to learn learning strategies, Technical report, Fakultät für Informatik, Technische Universität München, 1995.

Y. Baram, R. El-Yaniv, and K. Luz, Online choice of active learning algorithms, The Journal of Machine Learning Research, vol.5, pp.255-291, 2004.

M. Lopes, F. S. Melo, and L. Montesano, Active Learning for Reward Estimation in Inverse Reinforcement Learning, Machine Learning and Knowledge Discovery in Databases (ECML/PKDD'09), 2009.
DOI : 10.1007/978-3-642-04174-7_3

S. Dasgupta, Two faces of active learning, Theoretical Computer Science, vol.412, issue.19, pp.1767-1781, 2011.
DOI : 10.1016/j.tcs.2010.12.054

R. Nowak, The Geometry of Generalized Binary Search, IEEE Transactions on Information Theory, vol.57, issue.12, pp.7893-7906, 2011.
DOI : 10.1109/TIT.2011.2169298

S. Thrun, Exploration in active learning, The Handbook of Brain Theory and Neural Networks, pp.381-384, 1995.

R. Brafman and M. Tennenholtz, R-max - a general polynomial time algorithm for near-optimal reinforcement learning, The Journal of Machine Learning Research, vol.3, pp.213-231, 2003.

J. Schmidhuber, A possibility for implementing curiosity and boredom in model-building neural controllers, From Animals to Animats: First International Conference on Simulation of Adaptive Behavior, pp.222-227, 1991.

P. Oudeyer, F. Kaplan, and V. Hafner, Intrinsic Motivation Systems for Autonomous Mental Development, IEEE Transactions on Evolutionary Computation, vol.11, issue.2, pp.265-286, 2007.
DOI : 10.1109/TEVC.2006.890271

J. Schmidhuber, Developmental robotics, optimal artificial curiosity, creativity, music, and the fine arts, Connection Science, vol.18, issue.2, pp.173-187, 2006.
DOI : 10.1080/09540090600768658

A. Barto, S. Singh, and N. Chentanez, Intrinsically motivated learning of hierarchical collections of skills, International Conference on development and learning (ICDL'04), 2004.

G. Baldassarre, What are intrinsic motivations? A biological perspective, 2011 IEEE International Conference on Development and Learning (ICDL), 2011.
DOI : 10.1109/DEVLRN.2011.6037367

S. Singh, R. Lewis, and A. Barto, Where do rewards come from?, Annual Conference of the Cognitive Science Society, 2009.

D. Angluin, Queries and concept learning, Machine Learning, pp.319-342, 1988.
DOI : 10.1007/BF00116828

D. Golovin, A. Krause, and D. Ray, Near-optimal Bayesian active learning with noisy observations, Proc. Neural Information Processing Systems (NIPS), 2010.

D. Golovin and A. Krause, Adaptive submodularity: A new approach to active learning and stochastic optimization, Proc. International Conference on Learning Theory (COLT), 2010.

P. Auer, N. Cesa-Bianchi, Y. Freund, and R. Schapire, The Nonstochastic Multiarmed Bandit Problem, SIAM Journal on Computing, vol.32, issue.1, pp.48-77, 2003.
DOI : 10.1137/S0097539701398375

A. Baranes and P. Oudeyer, R-IAC: Robust intrinsically motivated exploration and active learning, IEEE Transactions on Autonomous Mental Development, vol.1, issue.3, pp.155-169, 2009.

N. Entwistle, Promoting deep learning through teaching and assessment: conceptual frameworks and educational contexts, TLRP Conference, 2000.

G. Nemhauser, L. Wolsey, and M. Fisher, An analysis of approximations for maximizing submodular set functions - I, Mathematical Programming, pp.265-294, 1978.
DOI : 10.1007/BF01588971

A. Krause and C. Guestrin, Near-optimal nonmyopic value of information in graphical models, Uncertainty in Artificial Intelligence (UAI'05), 2005.

V. Gabillon, A. Lazaric, M. Ghavamzadeh, and S. Bubeck, Multi-bandit best arm identification, Neural Information Processing Systems (NIPS'11), 2011.

A. Carpentier, A. Lazaric, M. Ghavamzadeh, R. Munos, and P. Auer, Upper-confidence-bound algorithms for active learning in multi-armed bandits, Algorithmic Learning Theory (ALT'11), 2011.

C. M. Bishop, Pattern recognition and machine learning, 2006.

L. Li, M. Littman, T. Walsh, and A. Strehl, Knows what it knows: a framework for self-aware learning, Proceedings of the 25th International Conference on Machine Learning, ICML '08, 2008.
DOI : 10.1145/1390156.1390228

M. Hoffman, E. Brochu, and N. de Freitas, Portfolio allocation for Bayesian optimization, Uncertainty in Artificial Intelligence (UAI'11), pp.327-336, 2011.

T. Hester and P. Stone, Intrinsically motivated model learning for a developing curious agent, 2012 IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL), 2012.
DOI : 10.1109/DevLrn.2012.6400802

M. Lopes, T. Lang, M. Toussaint, and P. Oudeyer, Exploration in model-based reinforcement learning by empirically estimating learning progress, Neural Information Processing Systems (NIPS'12), 2012.
URL : https://hal.archives-ouvertes.fr/hal-00755248

G. Qi, X. Hua, Y. Rui, J. Tang, and H. Zhang, Two-dimensional active learning for image classification, Computer Vision and Pattern Recognition (CVPR'08), 2008.

R. Reichart, K. Tomanek, U. Hahn, and A. Rappoport, Multi-task active learning for linguistic annotations, 2008.

P. Oudeyer, F. Kaplan, V. Hafner, and A. Whyte, The playground experiment: Task-independent development of a curious robot, AAAI Spring Symposium on Developmental Robotics, pp.42-47, 2005.

L. Jamone, L. Natale, K. Hashimoto, G. Sandini, and A. Takanishi, Learning task space control through goal directed exploration, 2011 IEEE International Conference on Robotics and Biomimetics, 2011.
DOI : 10.1109/ROBIO.2011.6181368

M. Rolf, J. Steil, and M. Gienger, Online Goal Babbling for rapid bootstrapping of inverse models in high dimensions, 2011 IEEE International Conference on Development and Learning (ICDL), 2011.
DOI : 10.1109/DEVLRN.2011.6037368

B. Price and C. Boutilier, Accelerating reinforcement learning through implicit imitation, J. Artificial Intelligence Research, vol.19, pp.569-629, 2003.

A. P. Shon, D. Verma, and R. P. Rao, Active imitation learning, AAAI Conference on Artificial Intelligence (AAAI'07), 2007.

S. Nguyen, A. Baranes, and P. Oudeyer, Bootstrapping intrinsically motivated learning with human demonstration, 2011 IEEE International Conference on Development and Learning (ICDL), 2011.
DOI : 10.1109/DEVLRN.2011.6037329

G. Konidaris and A. Barto, Sensorimotor abstraction selection for efficient, autonomous robot skill acquisition, 2008 7th IEEE International Conference on Development and Learning, 2008.
DOI : 10.1109/DEVLRN.2008.4640821

O. A. Maillard, R. Munos, and D. Ryabko, Selecting the state-representation in reinforcement learning, Advances in Neural Information Processing Systems (NIPS'11), 2011.
URL : https://hal.archives-ouvertes.fr/hal-00639483

A. Krause, A. Singh, and C. Guestrin, Near-optimal sensor placements in gaussian processes: Theory, efficient algorithms and empirical studies, Journal of Machine Learning Research, vol.9, pp.235-284, 2008.

S. M. Nguyen, S. Ivaldi, N. Lyubova, A. Droniou, D. Gérardeaux-Viret et al., Learning to recognize objects through curiosity-driven manipulation, 2013 IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL), 2013.
DOI : 10.1109/devlrn.2013.6652525

URL : https://hal.archives-ouvertes.fr/hal-00919674

A. McGovern and A. G. Barto, Automatic discovery of subgoals in reinforcement learning using diverse density, International Conference on Machine Learning (ICML'01), 2001.

Ö. Şimşek and A. G. Barto, Using relative novelty to identify useful temporal abstractions in reinforcement learning, International Conference on Machine Learning (ICML'04), 2004.

J. Schmidhuber, PowerPlay: Training an Increasingly General Problem Solver by Continually Searching for the Simplest Still Unsolvable Problem, Frontiers in Psychology, vol.4, 2013.
DOI : 10.3389/fpsyg.2013.00313

Y. Sun, F. Gomez, and J. Schmidhuber, Planning to Be Surprised: Optimal Bayesian Exploration in Dynamic Environments, Artificial General Intelligence, pp.41-51, 2011.
DOI : 10.1007/978-3-642-22887-2_5

J. Elman, Rethinking innateness: A connectionist perspective on development, 1997.

M. Lapeyre, O. Ly, and P. Oudeyer, Maturational constraints for motor learning in high-dimensions: The case of biped walking, 2011 11th IEEE-RAS International Conference on Humanoid Robots, pp.707-714, 2011.
DOI : 10.1109/Humanoids.2011.6100909

URL : https://hal.archives-ouvertes.fr/hal-00649333

S. Singh, R. L. Lewis, A. G. Barto, and J. Sorg, Intrinsically Motivated Reinforcement Learning: An Evolutionary Perspective, IEEE Transactions on Autonomous Mental Development, vol.2, issue.2, 2010.
DOI : 10.1109/TAMD.2010.2051031
