Détection automatique de sons bien réalisés - Inria - Institut national de recherche en sciences et technologies du numérique
Conference paper, 2004

Détection automatique de sons bien réalisés (Automatic detection of well-realized sounds)

Abstract

Given a phonetic context, sounds can be uttered with more or less salient acoustic cues depending on speech style and prosody. In previous work we studied strong acoustic cues of unvoiced stops that enable very reliable identification of stops. In this paper we build on that idea with a view to exploiting well-realized sounds to enhance speech intelligibility in the framework of language learning. We therefore designed an elitist HMM training procedure that makes very reliable phone models emerge. Training is iterated by feeding the phones identified correctly at the previous iteration back into the training algorithm; in this way the models specialize in representing well-realized sounds. Experiments were carried out on the BREF 80 corpus by constructing well-realized phone models for unvoiced stops. They show that these contextual models were triggered for 60% of stop occurrences, with an extremely low confusion rate.
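The elitist loop described above can be sketched in miniature. This is not the paper's HMM system: as a hedged illustration, a 1-D Gaussian stands in for each phone model, and "identified correctly" is approximated by a token being closer (in normalized distance) to its own model than to a competing one. All names and thresholds here are assumptions for the sketch.

```python
import random
import statistics

def train_model(samples):
    # Fit a toy 1-D Gaussian "phone model" (stand-in for an HMM).
    return statistics.mean(samples), max(statistics.pstdev(samples), 1e-9)

def elitist_training(samples, competitor_samples, n_iter=3):
    """Iteratively retrain on only the tokens the current model
    identifies reliably, so the model specializes in well-realized
    tokens (toy analogue of the elitist training loop)."""
    kept = list(samples)
    mu_c, sigma_c = train_model(competitor_samples)
    for _ in range(n_iter):
        if len(kept) < 2:          # nothing reliable left to train on
            break
        mu, sigma = train_model(kept)
        # Keep a token only if it is closer to its own model than to
        # the competitor: a crude stand-in for "identified correctly
        # at the previous iteration".
        kept = [x for x in kept
                if abs(x - mu) / sigma < abs(x - mu_c) / sigma_c]
    return train_model(kept), kept

# Synthetic data: well-realized tokens vs. a confusable competitor class.
random.seed(0)
good = [random.gauss(0.0, 1.0) for _ in range(200)]
confusable = [random.gauss(3.0, 1.0) for _ in range(200)]
model, kept = elitist_training(good, confusable)
```

Each iteration discards ambiguous tokens, so the surviving training set shrinks toward clearly realized examples and the resulting model becomes more selective, mirroring the trade-off the paper reports (coverage of only part of the occurrences, but a very low confusion rate).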
Main file: A04-R-284.pdf (323.94 KB)

Dates and versions

inria-00099895 , version 1 (26-09-2006)

Identifiers

  • HAL Id : inria-00099895 , version 1

Cite

Yves Laprie, Safaa Jarifi, Anne Bonneau, Dominique Fohr. Détection automatique de sons bien réalisés. Actes des XXVes Journées d'Étude sur la Parole - JEP'2004, 2004, Fès, Maroc, 4 p. ⟨inria-00099895⟩
