Conference paper, 2004

An Effective Lip Tracking Algorithm for Acoustic-to-Articulatory Inversion

Abstract

Although automatic speech recognition systems can now perform well under certain conditions, they still do not provide good results in real-life conditions, especially in noisy environments. Several authors have suggested that using articulatory features rather than acoustic features as a basis for speech parameterization would yield better recognition results. Articulatory features can be recovered from the speech signal by acoustic-to-articulatory inversion. Given the acoustic signal, recovering the articulatory state is considered difficult because of the "one-to-many" nature of the acoustic-to-articulatory inversion problem: a given articulatory state has only one acoustic realization, but an acoustic signal can be the outcome of more than one articulatory state. Since visual information is complementary to acoustic information in the inversion, lip tracking is proposed in this paper to provide visual information on lip movement for the acoustic-to-articulatory inversion. Encouraging results demonstrate the effectiveness of this method, which provides useful information (i.e. mouth width and height) for the inversion.
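The only visual features the abstract commits to are mouth width and height. As a purely illustrative sketch (the tracking algorithm itself is not described here), the following shows one way such per-frame width and height features could be derived once lip landmarks have been tracked; the landmark layout and all names (LipLandmarks, mouth_width_height) are assumptions, not the authors' method.

# Illustrative sketch only: assumes lip landmarks (mouth corners and the
# upper/lower mid-lip points) have already been tracked in each video frame.
from dataclasses import dataclass
from typing import Tuple
import math

Point = Tuple[float, float]  # (x, y) in image coordinates

@dataclass
class LipLandmarks:
    left_corner: Point
    right_corner: Point
    upper_mid: Point
    lower_mid: Point

def _dist(a: Point, b: Point) -> float:
    # Euclidean distance between two image points
    return math.hypot(a[0] - b[0], a[1] - b[1])

def mouth_width_height(lm: LipLandmarks) -> Tuple[float, float]:
    """Per-frame mouth width and height (in pixels) from tracked landmarks."""
    width = _dist(lm.left_corner, lm.right_corner)
    height = _dist(lm.upper_mid, lm.lower_mid)
    return width, height

# Example: visual features for one frame, usable as complementary constraints
# alongside acoustic features in acoustic-to-articulatory inversion.
frame = LipLandmarks((120.0, 200.0), (190.0, 202.0), (155.0, 185.0), (156.0, 215.0))
print(mouth_width_height(frame))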

Domains

Other [cs.OH]
Main file
A04-R-336.pdf (150.72 KB)

Dates and versions

inria-00099905, version 1 (26-09-2006)

Identifiers

  • HAL Id: inria-00099905, version 1

Cite

Jingying Chen, Marie-Odile Berger, Yves Laprie. An Effective Lip Tracking Algorithm for Acoustic-to-Articulatory Inversion. 5th International Workshop on Image Analysis for Multimedia - WIAMIS'2004, Apr 2004, Lisbon, Portugal, 3 p. ⟨inria-00099905⟩