J. Huang, T. Supaongprapa, I. Terakura, F. Wang, N. Ohnishi et al., A model-based sound localization system and its application to robot navigation, Robotics and Autonomous Systems, vol.27, issue.4, pp.199-209, 1999.
DOI : 10.1016/S0921-8890(99)00002-0

K. Nakadai, H. G. Okuno, and H. Kitano, Real-time sound source localization and separation for robot audition, Int. Conf. on Spoken Language Processing, pp.193-196, 2002.

S. Yamamoto, K. Nakadai, M. Nakano, H. Tsujino, J. Valin et al., Real-time robot audition system that recognizes simultaneous speech in the real world, 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp.5333-5338, 2006.
DOI : 10.1109/IROS.2006.282037

K. Nakadai, H. G. Okuno, H. Nakajima, Y. Hasegawa, and H. Tsujino, An open source software system for robot audition HARK and its evaluation, Humanoids 2008, 8th IEEE-RAS International Conference on Humanoid Robots, pp.561-566, 2008.
DOI : 10.1109/ICHR.2008.4756031

Y. Sakagami, R. Watanabe, C. Aoyama, S. Matsunaga, N. Higaki et al., The intelligent ASIMO: system overview and integration, IEEE/RSJ International Conference on Intelligent Robots and Systems, pp.2478-2483, 2002.
DOI : 10.1109/IRDS.2002.1041641

S. Chu, S. Narayanan, C. Kuo, and M. J. Mataric, Where am I? Scene Recognition for Mobile Robots using Audio Features, 2006 IEEE International Conference on Multimedia and Expo, pp.885-888, 2006.
DOI : 10.1109/ICME.2006.262661

Y. Sasaki, M. Kaneyoshi, S. Kagami, H. Mizoguchi, and T. Enomoto, Daily sound recognition using Pitch-Cluster-Maps for mobile robot audition, 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp.2724-2729, 2009.
DOI : 10.1109/IROS.2009.5354241

N. Yamakawa, T. Takahashi, T. Kitahara, T. Ogata, and H. G. Okuno, Environmental sound recognition for robot audition using matching pursuit, Modern Approaches in Applied Intelligence, ser. Lecture Notes in Computer Science, pp.1-10, 2011.

J. Stork, L. Spinello, J. Silva, and K. Arras, Audio-based human activity recognition using Non-Markovian Ensemble Voting, 2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication, pp.509-514, 2012.
DOI : 10.1109/ROMAN.2012.6343802

M. Janvier, X. Alameda-Pineda, L. Girin, and R. P. Horaud, Sound-event recognition with a companion humanoid, IEEE Int. Conf. on Humanoid Robotics, 2012.
URL : https://hal.archives-ouvertes.fr/hal-00768767

L. R. Rabiner, A tutorial on Hidden Markov Models and selected applications in speech recognition, Proceedings of the IEEE, vol.77, issue.2, pp.257-286, 1989.

H. D. Tran and H. Li, Sound Event Recognition With Probabilistic Distance SVMs, IEEE Transactions on Audio, Speech, and Language Processing, vol.19, issue.6, pp.1556-1568, 2011.
DOI : 10.1109/TASL.2010.2093519

Y. Toyoda, J. Huang, S. Ding, and Y. Liu, Environmental sound recognition by multilayered neural networks, The Fourth International Conference on Computer and Information Technology, 2004. CIT '04., pp.123-127, 2004.
DOI : 10.1109/CIT.2004.1357184

G. Guo and S. Z. Li, Content-based audio classification and retrieval by support vector machines, IEEE Transactions on Neural Networks, vol.14, issue.1, pp.209-215, 2003.

V. Ramasubramanian, R. Karthik, S. Thiyagarajan, and S. Cherla, Continuous audio analytics by HMM and Viterbi decoding, 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp.2396-2399, 2011.
DOI : 10.1109/ICASSP.2011.5946966

M. Cowling and R. Sitte, Comparison of techniques for environmental sound recognition, Pattern Recognition Letters, vol.24, issue.15, pp.2895-2907, 2003.
DOI : 10.1016/S0167-8655(03)00147-8

G. Peeters, A large set of audio features for sound description (similarity and classification) in the cuidado project, 2004.

J. Saunders, Real-time discrimination of broadcast speech/music, 1996 IEEE International Conference on Acoustics, Speech, and Signal Processing Conference Proceedings, pp.993-996, 1996.
DOI : 10.1109/ICASSP.1996.543290

E. Scheirer and M. Slaney, Construction and evaluation of a robust multifeature speech/music discriminator, 1997 IEEE International Conference on Acoustics, Speech, and Signal Processing, pp.1331-1334, 1997.
DOI : 10.1109/ICASSP.1997.596192

S. Mallat, A wavelet tour of signal processing, 1999.

C. Lin, S. Chen, T. Truong, and Y. Chang, Audio classification and categorization based on wavelets and support vector machine, IEEE Transactions on Speech and Audio Processing, vol.13, issue.5, pp.644-651, 2005.

G. Tzanetakis, G. Essl, and P. Cook, Audio analysis using the discrete wavelet transform, Conf. in Acoust. and Music Theory App, 2001.

T. C. Walters, Auditory-based processing of communication sounds, 2011.

R. F. Lyon, M. Rehn, S. Bengio, T. C. Walters, and G. Chechik, Sound retrieval and ranking using sparse auditory representations, Neural Computation, vol.22, issue.9, pp.2390-2416, 2010.

T. Walters and W. van Engen, AIMC: A C++ implementation of the auditory image model, 2012. Available: https://code.google.com

B. Mathieu, S. Essid, T. Fillon, J. Prado, and G. Richard, Yaafe, an easy to use and efficient audio feature extraction software, Int. Conf. for Music Information Retrieval (ISMIR), 2010.

C. M. Bishop, Pattern recognition and machine learning, 2006.

A. Temko and C. Nadeu, Classification of acoustic events using SVM-based clustering schemes, Pattern Recognition, vol.39, issue.4, pp.682-694, 2006.
DOI : 10.1016/j.patcog.2005.11.005

A. Rabaoui, H. Kadri, Z. Lachiri, and N. Ellouze, One-class SVMs challenges in audio detection and classification applications, EURASIP Journal on Advances in Signal Processing, 2008.

K. P. Murphy, Machine learning: A probabilistic perspective, 2012.

C. Chang and C. Lin, LIBSVM: A library for support vector machines, ACM Transactions on Intelligent Systems and Technology, vol.2, issue.3, pp.27:1-27:27, 2011.
DOI : 10.1145/1961189.1961199

A. Ito, T. Kanayama, M. Suzuki, and S. Makino, Internal noise suppression for speech recognition by small robots, European Conf. on Speech Communication and Technology, 2005.