R. Queneau, Exercises in style, 2013.

A. Barbulescu, T. Hueber, G. Bailly, and R. Ronfard, Audiovisual speaker conversion using prosody features, International Conference on Auditory-Visual Speech Processing, 2013.
URL : https://hal.archives-ouvertes.fr/hal-00842928

K. R. Scherer, Vocal affect expression: A review and a model for future research, Psychological Bulletin, vol.99, issue.2, p.143, 1986.
DOI : 10.1037/0033-2909.99.2.143

D. Bolinger, Intonation and its uses: Melody in grammar and discourse, 1989.

Z. Zeng, M. Pantic, G. I. Roisman, and T. S. Huang, A survey of affect recognition methods, Proceedings of the ninth international conference on Multimodal interfaces, ICMI '07, pp.39-58, 2009.
DOI : 10.1145/1322192.1322216

J. Vroomen, R. Collier, and S. J. Mozziconacci, Duration and intonation in emotional speech, Eurospeech, 1993.

J. Tao, Y. Kang, and A. Li, Prosody conversion from neutral speech to emotional speech, IEEE Transactions on Audio, Speech, and Language Processing, pp.1145-1154, 2006.

Z. Inanoglu and S. Young, A system for transforming the emotion in speech: combining data-driven conversion techniques for prosody and voice quality, INTERSPEECH, pp.490-493, 2007.

S. Mori, T. Moriyama, and S. Ozawa, Emotional Speech Synthesis using Subspace Constraints in Prosody, 2006 IEEE International Conference on Multimedia and Expo, pp.1093-1096, 2006.
DOI : 10.1109/ICME.2006.262725

C. Wu, C. Hsia, C. Lee, and M. Lin, Hierarchical prosody conversion using regression-based clustering for emotional speech synthesis, IEEE Transactions on Audio, Speech, and Language Processing, pp.1394-1405, 2010.

Y. Stylianou, O. Cappé, and E. Moulines, Statistical methods for voice quality transformation, EUROSPEECH, 1995.

T. Toda, A. W. Black, and K. Tokuda, Voice Conversion Based on Maximum-Likelihood Estimation of Spectral Parameter Trajectory, IEEE Transactions on Audio, Speech, and Language Processing, pp.2222-2235, 2007.
DOI : 10.1109/TASL.2007.907344

R. Aihara, R. Takashima, T. Takiguchi, and Y. Ariki, GMM-Based Emotional Voice Conversion Using Spectrum and Prosody Features, American Journal of Signal Processing, vol.2, issue.5, pp.134-138, 2012.
DOI : 10.5923/j.ajsp.20120205.06

E. Chuang and C. Bregler, Performance driven facial animation using blendshape interpolation, Computer Science Technical Report, vol.2, issue.2, p.3, 2002.

D. Vlasic, M. Brand, H. Pfister, and J. Popović, Face transfer with multilinear models, ACM Transactions on Graphics, vol.24, issue.3, pp.426-433, 2005.
DOI : 10.1145/1073204.1073209

Y. Cao, P. Faloutsos, E. Kohler, and F. Pighin, Real-time speech motion synthesis from recorded motions, Proceedings of the 2004 ACM SIGGRAPH/Eurographics symposium on Computer animation, SCA '04, pp.345-353, 2004.
DOI : 10.1145/1028523.1028570

E. Chuang and C. Bregler, Mood swings: expressive speech animation, ACM Transactions on Graphics, vol.24, issue.2, pp.331-347, 2005.
DOI : 10.1145/1061347.1061355

H. Yehia, T. Kuratate, and E. Vatikiotis-Bateson, Facial animation and head motion driven by speech acoustics, 5th Seminar on Speech Production: Models and Data, Kloster Seeon, pp.265-268, 2000.

C. Busso, Z. Deng, M. Grimm, U. Neumann, and S. Narayanan, Rigid Head Motion in Expressive Speech Animation: Analysis and Synthesis, IEEE Transactions on Audio, Speech, and Language Processing, pp.1075-1086, 2007.
DOI : 10.1109/TASL.2006.885910

Y. Morlec, G. Bailly, and V. Aubergé, Generating prosodic attitudes in French: Data, model and evaluation, Speech Communication, vol.33, issue.4, pp.357-371, 2001.
DOI : 10.1016/S0167-6393(00)00065-0

I. Fónagy, E. Bérard, and J. Fónagy, Clichés mélodiques, Folia Linguistica, vol.17, issue.1-4, pp.153-186, 1983.
DOI : 10.1515/flin.1983.17.1-4.153

V. Aubergé and G. Bailly, Generation of intonation: a global approach, EUROSPEECH, 1995.

G. Bailly and B. Holm, SFC: A trainable prosodic model, Speech Communication, vol.46, issue.3-4, pp.348-364, 2005.
DOI : 10.1016/j.specom.2005.04.008

URL : https://hal.archives-ouvertes.fr/hal-00416724

S. Baron-Cohen, Mind reading: the interactive guide to emotions, 2003.

G. Bailly, T. Barbe, and H. Wang, Automatic labeling of large prosodic databases: Tools, methodology and links with a text-to-speech system, The ESCA Workshop on Speech Synthesis, 1991.

P. Boersma, Praat, a system for doing phonetics by computer, Glot International, pp.341-345, 2002.

P. Barbosa and G. Bailly, Characterisation of rhythmic patterns for text-to-speech synthesis, Speech Communication, vol.15, issue.1-2, pp.127-137, 1994.
DOI : 10.1016/0167-6393(94)90047-7

E. Moulines and F. Charpentier, Pitch-synchronous waveform processing techniques for text-to-speech synthesis using diphones, Speech Communication, vol.9, issue.5-6, pp.453-467, 1990.
DOI : 10.1016/0167-6393(90)90021-Z

D. J. Berndt and J. Clifford, Using dynamic time warping to find patterns in time series, KDD workshop, pp.359-370, 1994.

A. Barbulescu, R. Ronfard, G. Bailly, G. Gagneré, and H. Cakmak, Beyond basic emotions, Proceedings of the Seventh International Conference on Motion in Games, MIG '14, pp.39-47, 2014.
DOI : 10.1145/2668064.2668084

URL : https://hal.archives-ouvertes.fr/hal-01064989

M. Pouget, T. Hueber, G. Bailly, and T. Baumann, HMM training strategy for incremental speech synthesis, Interspeech, 2015.
URL : https://hal.archives-ouvertes.fr/hal-01228889