Conference paper, 2005

Fully Automated Non-Native Speech Recognition Using Confusion-Based Acoustic Model Integration

Abstract

This paper presents a fully automated approach to the recognition of non-native speech based on acoustic model modification. For a native language (L1) and a spoken language (L2), pronunciation variants of the phones of L2 are automatically extracted from an existing non-native database as a confusion matrix mapping each L2 phone to sequences of L1 phones. This is done using the ASR systems of both L1 and L2. The confusion concept addresses the fact that some L2 phones have no direct match among the L1 phones. The confusion matrix is then used to modify the acoustic models (HMMs) of the L2 phones by integrating the corresponding L1 phone models as alternative HMM paths, so no lexicon modification is required. The modified ASR system achieved a relative WER improvement of between 32% and 40% (with L1 = French and L2 = English) on the French non-native database used for testing.
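As a rough illustration of the confusion-matrix extraction step described in the abstract, the Python sketch below estimates, for each L2 phone, the L1 phone sequences it is most often realized as. The input format (pairs of an L2 phone and the L1 phone sequence recognized over the same speech segment), the function name build_confusion_matrix, and the min_prob pruning threshold are assumptions for illustration; the paper's actual alignment procedure and HMM path integration are not reproduced here.

```python
from collections import Counter, defaultdict

def build_confusion_matrix(aligned_pairs, min_prob=0.1):
    """Estimate P(L1 phone sequence | L2 phone) from aligned recognizer outputs.

    `aligned_pairs` is assumed to be a list of (l2_phone, l1_phone_sequence)
    tuples obtained by aligning the L2 forced alignment with the L1 phone
    recognizer output on the same non-native speech (hypothetical format).
    """
    counts = defaultdict(Counter)
    for l2_phone, l1_sequence in aligned_pairs:
        counts[l2_phone][tuple(l1_sequence)] += 1

    confusion = {}
    for l2_phone, seq_counts in counts.items():
        total = sum(seq_counts.values())
        # Keep only sufficiently frequent L1 sequences; in the paper's approach
        # these would become alternative HMM paths added to the L2 phone model.
        confusion[l2_phone] = {
            seq: n / total for seq, n in seq_counts.items() if n / total >= min_prob
        }
    return confusion

# Toy example: English /th/ often realized by French speakers as [s] or [z].
pairs = [("th", ["s"]), ("th", ["s"]), ("th", ["z"]), ("iy", ["i"])]
print(build_confusion_matrix(pairs))
```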
Main file

eurospeech2005.pdf (158.34 KB)

Dates and versions

inria-00111920, version 1 (06-11-2006)

Identifiers

  • HAL Id: inria-00111920, version 1

Cite

Ghazi Bouselmi, Dominique Fohr, Irina Illina, Jean-Paul Haton. Fully Automated Non-Native Speech Recognition Using Confusion-Based Acoustic Model Integration. Interspeech'2005 - Eurospeech, 9th European Conference on Speech Communication and Technology, Sep 2005, Lisbon, Portugal. pp. 1369-1372. ⟨inria-00111920⟩
