J. A. Bachorowski, M. J. Smoski, and M. J. Owren, The acoustic features of human laughter, The Journal of the Acoustical Society of America, vol.110, issue.3, pp.1581-1597, 2001.
DOI : 10.1121/1.1391244

C. Becker-Asano and H. Ishiguro, Laughter in social robotics - no laughing matter, International Workshop on Social Intelligence Design, pp.287-300, 2009.

C. Becker-Asano, T. Kanda, C. Ishi, and H. Ishiguro, How about laughter? Perceived naturalness of two laughing humanoid robots, 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops, pp.1-6, 2009.
DOI : 10.1109/ACII.2009.5349371

D. Bernhardt and P. Robinson, Detecting Affect from Non-stylised Body Motions, Affective Computing and Intelligent Interaction, pp.59-70, 2007.
DOI : 10.1007/978-3-540-74889-2_6

P. Bourgeois and U. Hess, The impact of social context on mimicry, Biological Psychology, vol.77, issue.3, pp.343-352, 2008.
DOI : 10.1016/j.biopsycho.2007.11.008

M. Brand, Voice puppetry, Proceedings of the 26th annual conference on Computer graphics and interactive techniques, SIGGRAPH '99, pp.21-28, 1999.
DOI : 10.1145/311535.311537

C. Bregler, M. Covell, and M. Slaney, Video Rewrite, Proceedings of the 24th annual conference on Computer graphics and interactive techniques, SIGGRAPH '97, pp.353-360, 1997.
DOI : 10.1145/258734.258880

R. Cai, L. Lu, H. Zhang, and L. Cai, Highlight sound effects detection in audio stream, Proceedings of the 2003 IEEE International Conference on Multimedia and Expo (ICME), pp.37-40, 2003.

G. Castellano, S. D. Villalba, and A. Camurri, Recognising Human Emotions from Body Movement and Gesture Dynamics, Affective computing and intelligent interaction, pp.71-82, 2007.
DOI : 10.1007/978-3-540-74889-2_7

M. M. Cohen and D. W. Massaro, Modeling Coarticulation in Synthetic Visual Speech, Models and Techniques in Computer Animation, pp.139-156, 1993.
DOI : 10.1007/978-4-431-66911-1_13

D. Cosker and J. Edge, Laughing, crying, sneezing and yawning: Automatic voice driven animation of non-speech articulations, Proc. of Computer Animation and Social Agents (CASA09), pp.21-24, 2009.

Z. Deng, J. Lewis, and U. Neumann, Synthesizing speech animation by learning compact speech co-articulation models, Computer Graphics International, pp.19-25, 2005.

P. C. Dilorenzo, V. B. Zordan, and B. L. Sanders, Laughing out loud: control for modeling anatomically inspired laughter using audio, ACM Transactions on Graphics (TOG), vol.27, issue.5, p.125, 2008.

T. Ezzat, G. Geiger, and T. Poggio, Trainable videorealistic speech animation, Proceedings of the Sixth IEEE International Conference on Automatic Face and Gesture Recognition, pp.57-64, 2004.

S. Fukushima, Y. Hashimoto, T. Nozawa, and H. Kajimoto, Laugh enhancer using laugh track synchronized with the user's laugh motion, Proceedings of the 28th international conference extended abstracts on Human factors in computing systems, CHI EA '10, 2010.
DOI : 10.1145/1753846.1754027

S. W. Gilroy, M. Cavazza, M. Niranen, E. Andre, T. Vogt et al., PAD-based multimodal affective fusion, 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops, 2009.
DOI : 10.1109/ACII.2009.5349552

S. Gosling, P. J. Rentfrow, and W. B. Swann, A very brief measure of the Big-Five personality domains, Journal of Research in Personality, vol.37, issue.6, pp.504-528, 2003.
DOI : 10.1016/S0092-6566(03)00046-1

J. Hofmann, T. Platt, J. Urbain, R. Niewiadomski, and W. Ruch, Laughing avatar interaction evaluation form, 2012.

L. Kennedy and D. Ellis, Laughter detection in meetings, NIST ICASSP 2004 Meeting Recognition Workshop, pp.118-121, 2004.

A. Kleinsmith and N. Bianchi-Berthouze, Affective Body Expression Perception and Recognition: A Survey, IEEE Transactions on Affective Computing, vol.4, issue.1, 2012.
DOI : 10.1109/T-AFFC.2012.16

A. Kleinsmith, N. Bianchi-Berthouze, and A. Steed, Automatic Recognition of Non-Acted Affective Postures, IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol.41, issue.4, pp.1027-1038, 2011.
DOI : 10.1109/TSMCB.2010.2103557

M. T. Knox and N. Mirghafori, Automatic laughter detection using neural networks, Proceedings of Interspeech 2007, pp.2973-2976, 2007.

S. Kshirsagar and N. Magnenat-Thalmann, Visyllable Based Speech Animation, Computer Graphics Forum, vol.22, issue.3, pp.632-640, 2003.

E. Lasarcyk and J. Trouvain, Imitating conversational laughter with an articulatory speech synthesizer, Proceedings of the Interdisciplinary Workshop on The Phonetics of Laughter, pp.43-48, 2007.

I. Leite, G. Castellano, A. Pereira, C. Martinho, and A. Paiva, Modelling empathic behaviour in a robotic game companion for children, Proceedings of the seventh annual ACM/IEEE international conference on Human-Robot Interaction, HRI '12, pp.367-374, 2012.
DOI : 10.1145/2157689.2157811

W. Liu, B. Yin, X. Jia, and D. Kong, Audio to visual signal mappings with HMM, IEEE International Conference on Acoustics, Speech, and Signal Processing, p.4, 2004.

M. Mancini, D. Glowinski, and A. Massari, Realtime Expressive Movement Detection Using the EyesWeb XMI Platform, INTETAIN, Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, pp.221-222, 2011.
DOI : 10.1007/978-3-642-30214-5_25

M. Mancini, J. Hofmann, T. Platt, G. Volpe, G. Varni et al., Towards Automated Full Body Detection of Laughter Driven by Human Expert Annotation, 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction, pp.757-762, 2013.
DOI : 10.1109/ACII.2013.140

M. Mancini, G. Varni, D. Glowinski, and G. Volpe, Computing and evaluating the body laughter index, Human Behavior Understanding, pp.90-98, 2012.

H. Meng, A. Kleinsmith, and N. Bianchi-Berthouze, Multi-score learning for affect recognition: the case of body postures, Affective Computing and Intelligent Interaction, pp.225-234, 2011.

R. Niewiadomski, M. Mancini, T. Baur, G. Varni, H. Griffin et al., MMLI: Multimodal Multiperson Corpus of Laughter in Interaction, International Workshop on Human Behavior Understanding, held in conjunction with ACM Multimedia 2013, 2013.
DOI : 10.1007/978-3-319-02714-2_16

R. Niewiadomski and C. Pelachaud, Towards Multimodal Expression of Laughter, Proceedings of the 12th international conference on Intelligent Virtual Agents, pp.231-244, 2012.
DOI : 10.1007/978-3-642-33197-8_24

R. Niewiadomski, J. Hofmann, J. Urbain, T. Platt, J. Wagner et al., Laugh-aware virtual agent and its impact on user amusement, Proceedings of the 2013 international conference on Autonomous agents and multi-agent systems, AAMAS '13, International Foundation for Autonomous Agents and Multiagent Systems, pp.619-626, 2013.
URL : https://hal.archives-ouvertes.fr/hal-00869751

K. Oura, HMM-based speech synthesis system (HTS) [computer program webpage], 2011.

M. D. Owens, It's All in the Game: Gamification, Games, and Gambling, Gaming Law Review and Economics, vol.16, issue.3, 2012.
DOI : 10.1089/glre.2012.1634

S. Petridis and M. Pantic, Audiovisual discrimination between laughter and speech, 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp.5117-5120, 2008.
DOI : 10.1109/ICASSP.2008.4518810

S. Petridis and M. Pantic, Is this joke really funny? judging the mirth by audiovisual laughter analysis, 2009 IEEE International Conference on Multimedia and Expo, pp.1444-1447, 2009.
DOI : 10.1109/ICME.2009.5202774

E. A. Poe, Maelzel's chess-player, Southern Literary Messenger, pp.318-326, 1836.

B. Qu, S. Pammi, R. Niewiadomski, and G. Chollet, Estimation of FAPs and intensities of AUs based on real-time face tracking, Proceedings of the 3rd Symposium on Facial Analysis and Animation, FAA '12, 2012.
DOI : 10.1145/2491599.2491612

W. Ruch and P. Ekman, The expressive pattern of laughter, in: Emotion, Qualia and Consciousness, pp.426-443, 2001.

W. Ruch and R. Proyer, Extending the study of gelotophobia: On gelotophiles and katagelasticists, Humor - International Journal of Humor Research, vol.22, issue.1-2, pp.183-212, 2009.
DOI : 10.1515/HUMR.2009.009

T. Ruf, A. Ernst, and C. Küblbeck, Face Detection with the Sophisticated High-speed Object Recognition Engine (SHORE), Microelectronic Systems, pp.243-252, 2011.
DOI : 10.1007/978-3-642-23071-4_23

S. Scherer, M. Glodek, F. Schwenker, N. Campbell, and G. Palm, Spotting laughter in natural multiparty conversations, ACM Transactions on Interactive Intelligent Systems, vol.2, issue.1, Article 4, pp.1-31, 2012.
DOI : 10.1145/2133366.2133370

S. Sundaram and S. Narayanan, Automatic acoustic synthesis of human-like laughter, The Journal of the Acoustical Society of America, vol.121, issue.1, pp.527-535, 2007.
DOI : 10.1121/1.2390679

K. Tokuda, T. Yoshimura, T. Masuko, T. Kobayashi, and T. Kitamura, Speech parameter generation algorithms for HMM-based speech synthesis, Proceedings of ICASSP 2000, pp.1315-1318, 2000.

K. Tokuda, H. Zen, and A. Black, An HMM-based speech synthesis system applied to English, Proceedings of the 2002 IEEE Speech Synthesis Workshop, pp.227-230, 2002.

K. P. Truong and D. A. van Leeuwen, Automatic discrimination between laughter and speech, Speech Communication, vol.49, issue.2, pp.144-158, 2007.
DOI : 10.1016/j.specom.2007.01.001
URL : https://hal.archives-ouvertes.fr/hal-00499165

J. Urbain, H. Cakmak, and T. Dutoit, Arousal-driven synthesis of laughter, submitted to IEEE Journal of Selected Topics in Signal Processing, Special Issue on Statistical Parametric Speech Synthesis, 2014.

J. Urbain, H. Cakmak, and T. Dutoit, Development of HMM-based acoustic laughter synthesis, Interdisciplinary Workshop on Laughter and other Non-Verbal Vocalisations in Speech, pp.26-27, 2012.

J. Urbain, H. Cakmak, and T. Dutoit, Evaluation of HMM-based laughter synthesis, 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, 2013.
DOI : 10.1109/ICASSP.2013.6639189

J. Urbain and T. Dutoit, A Phonetic Analysis of Natural Laughter, for Use in Automatic Laughter Processing Systems, Proceedings of the fourth bi-annual International Conference of the HUMAINE Association on Affective Computing and Intelligent Interaction, pp.397-406, 2011.

J. Urbain, R. Niewiadomski, E. Bevacqua, T. Dutoit, A. Moinet et al., AVLaughterCycle, Journal on Multimodal User Interfaces, Special Issue: eNTERFACE'09, pp.47-58, 2010.
DOI : 10.1007/s12193-010-0053-1

J. Urbain, R. Niewiadomski, M. Mancini, H. Griffin, H. Cakmak et al., Multimodal Analysis of Laughter for an Interactive System, Proceedings of INTETAIN 2013, 2013.
DOI : 10.1007/978-3-642-34014-7_8

J. Wagner, F. Lingenfelser, T. Baur, I. Damian, F. Kistler et al., The Social Signal Interpretation (SSI) framework - multimodal signal processing and recognition in real-time, Proceedings of the 21st ACM International Conference on Multimedia, 2013.