Real-life Emotion Detection from Speech in Human-Robot Interaction: Experiments across Diverse Corpora with Child and Adult Voices

Abstract: This paper focuses on detecting emotions in a speaker's voice in a Human-Robot Interaction context. The work is part of the ROMEO project, which aims to design a robot for both elderly people and children. Our system comprises several modules based on multi-level processing of audio cues. The affective markers produced by these modules will be used to drive the robot's emotional behaviour. Since the models are built from recorded data while the system will be tested on real-life data, we need to estimate the performance of our emotion detection system in cross-corpus settings. Cross-validation experiments on three-class detection show that derivative and energy features may be removed from our feature set for this specific task. Cross-corpus experiments on anger-positive-neutral data suggest that detection performance may be better with two separate models: one for child voices and one for adult voices.
Document type: Conference papers

Cited literature: 12 references

https://hal.inria.fr/hal-01404151
Contributor: Marie Tahon
Submitted on: Monday, November 28, 2016

File: Interspeech_Tahon_2011.pdf (produced by the author(s))

Identifiers

  • HAL Id: hal-01404151, version 1

Citation

Marie Tahon, Agnes Delaborde, Laurence Devillers. Real-life Emotion Detection from Speech in Human-Robot Interaction: Experiments across Diverse Corpora with Child and Adult Voices. Interspeech, 2011, Firenze, Italy. ⟨hal-01404151⟩
