
On the quality of an expressive audiovisual corpus: a case study of acted speech

Slim Ouni 1, 2 Sara Dahmani 2 Vincent Colotte 2
2 MULTISPEECH - Speech Modeling for Facilitating Oral-Based Communication
Inria Nancy - Grand Est, LORIA - NLPKD - Department of Natural Language Processing & Knowledge Discovery
Abstract: When developing an expressive audiovisual speech synthesis system, the quality of the audiovisual corpus from which the 3D visual data will be extracted is critical. In this paper, we present a perceptual case study of the expressiveness of a set of emotions acted by a semi-professional actor. We analyzed this actor's production of a set of sentences with acted emotions through a human emotion-recognition task, considering several modalities: audio, real video, and 3D-extracted data, presented both unimodally and bimodally (with audio). The results show the necessity of such a perceptual evaluation prior to further exploitation of the data for the synthesis system. The comparison of modalities clearly shows which emotions need to be improved during production and how strongly the audio and visual components influence each other in emotional perception.

Contributor: Slim Ouni
Submitted on: Wednesday, September 27, 2017 - 8:08:13 PM
Last modified on: Wednesday, November 3, 2021 - 7:09:39 AM
Long-term archiving on: Thursday, December 28, 2017 - 2:20:41 PM


Files produced by the author(s)


  • HAL Id: hal-01596614, version 1



Slim Ouni, Sara Dahmani, Vincent Colotte. On the quality of an expressive audiovisual corpus: a case study of acted speech. The 14th International Conference on Auditory-Visual Speech Processing, KTH, Aug 2017, Stockholm, Sweden. ⟨hal-01596614⟩


