Real-life Emotion Detection from Speech in Human-Robot Interaction: Experiments across Diverse Corpora with Child and Adult Voices

Abstract: In this paper we focus on detecting emotions in a speaker's voice in a Human-Robot Interaction context. This work is part of the ROMEO project, which aims to design a robot for both elderly people and children. Our system comprises several modules based on multi-level processing of audio cues. The affective markers produced by these modules will be used to drive the robot's emotional behaviour. Since the models are built from recorded data while the system will be tested on real-life data, we need to estimate the performance of our emotion detection system in cross-corpus conditions. Cross-validation experiments on a three-class detection task show that derivative and energy features may be removed from our feature set for this specific task. Cross-corpus experiments on anger-positive-neutral data suggest that detection performance may be better with two separate models: one for child voices and one for adult voices.
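The abstract describes the evaluation protocol only at a high level. The sketch below illustrates what a cross-corpus evaluation with separate child and adult models could look like: train on one corpus, test on another, and report unweighted average recall over the three classes. The classifier choice (an RBF SVM), the feature dimensionality, and all function and variable names are assumptions for illustration, not details taken from the paper.

# Minimal sketch of a cross-corpus protocol for 3-class emotion detection
# (anger / positive / neutral). Classifier and features are assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import recall_score

def train_model(features, labels):
    """Train one emotion classifier on a single corpus."""
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    model.fit(features, labels)
    return model

def cross_corpus_uar(train_corpus, test_corpus):
    """Train on one corpus, test on another, return unweighted average recall."""
    model = train_model(train_corpus["X"], train_corpus["y"])
    predictions = model.predict(test_corpus["X"])
    return recall_score(test_corpus["y"], predictions, average="macro")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical corpora: 200 utterances x 40 acoustic features each.
    # In the setting suggested by the paper, one model would be built from
    # child-voice corpora and another from adult-voice corpora.
    child_corpus = {"X": rng.normal(size=(200, 40)), "y": rng.integers(0, 3, 200)}
    adult_corpus = {"X": rng.normal(size=(200, 40)), "y": rng.integers(0, 3, 200)}
    print("cross-corpus UAR:", cross_corpus_uar(child_corpus, adult_corpus))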
Document type:
Conference paper
Interspeech, 2011, Firenze, Italy

Cited literature: 12 references

https://hal.inria.fr/hal-01404151
Contributor: Marie Tahon
Submitted on: Monday, 28 November 2016 - 14:13:02
Last modified on: Thursday, 5 April 2018 - 12:30:23
Document(s) archived on: Tuesday, 21 March 2017 - 06:17:31

File

Interspeech_Tahon_2011.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : hal-01404151, version 1

Citation

Marie Tahon, Agnes Delaborde, Laurence Devillers. Real-life Emotion Detection from Speech in Human-Robot Interaction: Experiments across Diverse Corpora with Child and Adult Voices. Interspeech, 2011, Firenze, Italy. 〈hal-01404151〉
