E. Vatikiotis-Bateson, K. G. Munhall, Y. Kasahara, F. Garcia, and H. Yehia, Characterizing audiovisual information during speech, Proceedings of the Fourth International Conference on Spoken Language Processing (ICSLP '96), pp. 1485-1488, 1996.
DOI: 10.1109/ICSLP.1996.607897

B. Granström, D. House, and M. Lundeberg, Prosodic cues in multimodal speech perception, Proc. ICPhS, pp. 655-658, 1999.

K. G. Munhall, J. A. Jones, D. E. Callan, T. Kuratate, and E. Vatikiotis-Bateson, Visual prosody and speech intelligibility: Head movement improves auditory speech perception, Psychological Science, vol. 15, no. 2, pp. 133-137, 2004.

M. Swerts and E. Krahmer, Facial expression and prosodic prominence: Effects of modality and facial area, Journal of Phonetics, vol. 36, no. 2, pp. 219-238, 2008.
DOI: 10.1016/j.wocn.2007.05.001

C. Cavé, I. Guaitella, R. Bertrand, S. Santi, F. Harlay et al., About the relationship between eyebrow movements and F0 variations, Proceedings of the Fourth International Conference on Spoken Language Processing (ICSLP '96), pp. 2175-2178, 1996.
DOI: 10.1109/ICSLP.1996.607235

J. Beskow, B. Granström, and D. House, Visual correlates to prominence in several expressive modes, Proc. Interspeech, pp. 1272-1275, 2006.

R. Cowie, E. Douglas-Cowie, N. Tsapatsoulis, G. Votsis, S. Kollias et al., Emotion recognition in human-computer interaction, IEEE Signal Processing Magazine, vol. 18, no. 1, pp. 32-80, 2001.
DOI: 10.1109/79.911197

B. Schuller, G. Rigoll, and M. Lang, Speech emotion recognition combining acoustic features and linguistic information in a hybrid support vector machine-belief network architecture, Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2004.
DOI: 10.1109/ICASSP.2004.1326051

M. El Ayadi, M. S. Kamel, and F. Karray, Survey on speech emotion recognition: Features, classification schemes, and databases, Pattern Recognition, vol. 44, no. 3, pp. 572-587, 2011.
DOI: 10.1016/j.patcog.2010.09.020

I. R. Murray and J. L. Arnott, Implementation and testing of a system for producing emotion-by-rule in synthetic speech, Speech Communication, vol. 16, no. 4, pp. 369-390, 1995.
DOI: 10.1016/0167-6393(95)00005-9

J. Tao, Y. Kang, and A. Li, Prosody conversion from neutral speech to emotional speech, IEEE Transactions on Audio, Speech, and Language Processing, vol. 14, no. 4, pp. 1145-1154, 2006.

M. Schröder, Expressive speech synthesis: Past, present, and possible futures, in Affective Information Processing, Springer, 2009.

K. R. Scherer, A cross-cultural investigation of emotion inferences from voice and speech: Implications for speech technology, Proc. INTERSPEECH, pp. 379-382, 2000.

I. R. Murray and J. L. Arnott, Applying an analysis of acted vocal emotions to improve the simulation of synthetic speech, Computer Speech & Language, vol. 22, no. 2, pp. 107-129, 2008.
DOI: 10.1016/j.csl.2007.06.001