Détection automatique de sons bien réalisés - Archive ouverte HAL
Conference paper, 2004

Détection automatique de sons bien réalisés (Automatic detection of well-realized sounds)

Abstract

Given a phonetic context, sounds can be uttered with more or less salient acoustic cues depending on speech style and prosody. In previous work we studied strong acoustic cues of unvoiced stops that enable very reliable identification of stops. In this paper we build on that idea with a view to exploiting well-realized sounds to enhance speech intelligibility within the framework of language learning. We therefore designed an elitist HMM training procedure that makes very reliable phone models emerge. Training is iterated by feeding the phones identified correctly at the previous iteration back into the learning algorithm; in this way the models specialize in representing well-realized sounds. Experiments were carried out on the BREF 80 corpus by constructing well-realized phone models for unvoiced stops. They show that these contextual models triggered on 60% of stop occurrences with an extremely low confusion rate.
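The elitist iteration described in the abstract can be sketched as follows. This is a toy illustration, not the authors' system: real HMM training (e.g. Baum-Welch re-estimation) is replaced here by a one-parameter centroid "model", and the corpus, feature values, and phone labels are all hypothetical. Only the selection loop — retrain, keep the correctly identified occurrences, retrain on that elite subset — mirrors the paper's idea.

```python
# Sketch of the "elitist" iterative training loop: each pass keeps only
# the occurrences the current models identify correctly, and retrains on
# that subset, so the models specialize in well-realized sounds.
import random

random.seed(0)

def train(examples):
    """Toy stand-in for HMM training: the mean feature of the examples."""
    return sum(x for x, _ in examples) / len(examples)

def identify(x, models):
    """Assign a feature value to the phone with the closest model mean."""
    return min(models, key=lambda p: abs(x - models[p]))

# Synthetic corpus of (feature, phone) pairs for three unvoiced stops.
corpus = [(random.gauss(mu, 1.0), p)
          for p, mu in [("p", 0.0), ("t", 5.0), ("k", 10.0)]
          for _ in range(100)]

selected = corpus
for _ in range(5):  # elitist iterations
    models = {p: train([(x, q) for x, q in selected if q == p])
              for p in ("p", "t", "k")}
    # Keep only occurrences the current models identify correctly;
    # the next iteration trains on this elite subset alone.
    selected = [(x, p) for x, p in selected if identify(x, models) == p]

coverage = len(selected) / len(corpus)
```

The `coverage` value plays the role of the paper's trigger rate: the fraction of occurrences on which the specialized models still fire. As the subset shrinks, the surviving models cover fewer occurrences but confuse them less often.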
Main file: A04-R-284.pdf (323.94 KB)

Dates and versions

inria-00099895 , version 1 (26-09-2006)

Identifiers

  • HAL Id : inria-00099895 , version 1

Cite

Yves Laprie, Safaa Jarifi, Anne Bonneau, Dominique Fohr. Détection automatique de sons bien réalisés. Actes des XXVes Journées d'Étude sur la Parole - JEP'2004, 2004, Fès, Maroc, 4 p. ⟨inria-00099895⟩