K. R. Scherer, Vocal affect expression: A review and a model for future research, Psychological Bulletin, vol.99, issue.2, p.143, 1986.
DOI : 10.1037/0033-2909.99.2.143

K. R. Scherer and H. G. Wallbott, "Evidence for universality and cultural variation of differential emotion response patterning": Correction, Journal of Personality and Social Psychology, vol.67, issue.1, p.310, 1994.
DOI : 10.1037/0022-3514.67.1.55

C. Monzo, F. Alías, I. Iriondo, X. Gonzalvo, and S. Planet, Discriminating expressive speech styles by voice quality parameterization, Proc. of ICPhS, 2007.

Y. Morlec, G. Bailly, and V. Aubergé, Generating prosodic attitudes in French: Data, model and evaluation, Speech Communication, vol.33, issue.4, pp.357-371, 2001.
DOI : 10.1016/S0167-6393(00)00065-0

P. Oudeyer, Erratum to "The production and recognition of emotions in speech: features and algorithms", International Journal of Human-Computer Studies, vol.62, issue.3, pp.157-183, 2003.
DOI : 10.1016/j.ijhcs.2004.10.001

I. Iriondo, S. Planet, J. Socoró, and F. Alías, Objective and Subjective Evaluation of an Expressive Speech Corpus, Advances in Nonlinear Speech Processing, pp.86-94, 2007.
DOI : 10.1007/978-3-540-77347-4_5

H. Mixdorff, A. Hönemann, and A. Rilliard, Acoustic-prosodic analysis of attitudinal expressions in German, Proceedings of Interspeech 2015, 2015.

H. P. Graf, E. Cosatto, V. Strom, and F. J. Huang, Visual prosody: facial movements accompanying speech, Proceedings of Fifth IEEE International Conference on Automatic Face Gesture Recognition, p.396, 2002.
DOI : 10.1109/AFGR.2002.1004186

C. Busso, Z. Deng, M. Grimm, U. Neumann, and S. Narayanan, Rigid Head Motion in Expressive Speech Animation: Analysis and Synthesis, IEEE Transactions on Audio, Speech, and Language Processing, pp.1075-1086, 2007.
DOI : 10.1109/TASL.2006.885910

A. Barbulescu, R. Ronfard, G. Bailly, G. Gagneré, and H. Cakmak, Beyond basic emotions, Proceedings of the Seventh International Conference on Motion in Games, MIG '14, pp.39-47, 2014.
DOI : 10.1145/2668064.2668084

URL : https://hal.archives-ouvertes.fr/hal-01064989

K. Ruhland, S. Andrist, J. Badler, C. Peters, N. Badler et al., Look me in the eyes: A survey of eye and gaze animation for virtual agents and artificial systems, Eurographics 2014 - State of the Art Reports, pp.69-91, 2014.
URL : https://hal.archives-ouvertes.fr/hal-01025241

C. Busso, Z. Deng, S. Yildirim, M. Bulut, C. M. Lee et al., Analysis of emotion recognition using facial expressions, speech and multimodal information, Proceedings of the 6th international conference on Multimodal interfaces , ICMI '04, pp.205-211, 2004.
DOI : 10.1145/1027933.1027968

R. Kaliouby and P. Robinson, Real-time inference of complex mental states from facial expressions and head gestures, Real-time vision for human-computer interaction, pp.181-200, 2005.

S. Marsella, Y. Xu, M. Lhommet, A. Feng, S. Scherer et al., Virtual character performance from speech, Proceedings of the 12th ACM SIGGRAPH/Eurographics Symposium on Computer Animation, SCA '13, pp.25-35, 2013.
DOI : 10.1145/2485895.2485900

C. Davis, J. Kim, V. Aubanel, G. Zelic, and Y. Mahajan, The stability of mouth movements for multiple talkers over multiple sessions, Proceedings of the 2015 FAAVSP.

R. E. Kaliouby and P. Robinson, Mind reading machines: automated inference of cognitive mental states from video, 2004 IEEE International Conference on Systems, Man and Cybernetics (IEEE Cat. No.04CH37583), pp.682-688, 2004.
DOI : 10.1109/ICSMC.2004.1398380

E. Chuang and C. Bregler, Mood swings: expressive speech animation, ACM Transactions on Graphics, vol.24, issue.2, pp.331-347, 2005.
DOI : 10.1145/1061347.1061355

S. E. Kahou, C. Pal, X. Bouthillier, P. Froumenty, Ç. Gülçehre et al., Combining modality specific deep neural networks for emotion recognition in video, Proceedings of the 15th ACM on International conference on multimodal interaction, pp.543-550, 2013.

Y. Kim, H. Lee, and E. M. Provost, Deep learning for robust feature generation in audiovisual emotion recognition, 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pp.3687-3691, 2013.
DOI : 10.1109/ICASSP.2013.6638346

I. Fónagy, E. Bérard, and J. Fónagy, Clichés mélodiques, Folia Linguistica, vol.17, issue.1-4, pp.153-186, 1983.
DOI : 10.1515/flin.1983.17.1-4.153

D. Bolinger, Intonation and its uses: Melody in grammar and discourse, 1989.

A. Barbulescu, G. Bailly, R. Ronfard, and M. Pouget, Audiovisual generation of social attitudes from neutral stimuli, Facial Analysis, Animation and Auditory-Visual Speech Processing, 2015.
URL : https://hal.archives-ouvertes.fr/hal-01178056

I. Iriondo, S. Planet, J. Socoró, E. Martínez, F. Alías et al., Automatic refinement of an expressive speech corpus assembling subjective perception and automatic classification, Speech Communication, vol.51, issue.9, pp.744-758, 2009.
DOI : 10.1016/j.specom.2008.12.001

URL : https://hal.archives-ouvertes.fr/hal-00550285

S. Baron-Cohen, Mind reading: the interactive guide to emotions, 2003.

R. Queneau, Exercises in style, 2013.

P. Boersma, Praat, a system for doing phonetics by computer, pp.341-345, 2002.

W. N. Campbell, Syllable-based segmental duration, Talking machines: Theories, models, and designs, pp.211-224, 1992.

G. Bailly and B. Holm, SFC: A trainable prosodic model, Speech Communication, vol.46, issue.3-4, pp.348-364, 2005.
DOI : 10.1016/j.specom.2005.04.008

URL : https://hal.archives-ouvertes.fr/hal-00416724

H. Kawahara, STRAIGHT, exploitation of the other aspect of VOCODER: Perceptually isomorphic decomposition of speech sounds, Acoustical Science and Technology, vol.27, issue.6, pp.349-353, 2006.
DOI : 10.1250/ast.27.349

Y. Li and A. Ngom, The non-negative matrix factorization toolbox for biological data mining, Source Code for Biology and Medicine, vol.8, issue.1, 2013.
DOI : 10.1186/1751-0473-8-10

A. Adams, M. Mahmoud, T. Baltrušaitis, and P. Robinson, Decoupling facial expressions and head motions in complex emotions, 2015 International Conference on Affective Computing and Intelligent Interaction (ACII), pp.274-280, 2015.
DOI : 10.1109/ACII.2015.7344583