Adapting visual data to a linear articulatory model

Yves Laprie ¹, Blaise Potard ¹
¹ PAROLE - Analysis, perception and recognition of speech,
INRIA Lorraine, LORIA - Laboratoire Lorrain de Recherche en Informatique et ses Applications
Abstract : The goal of this work is to investigate audiovisual-to-articulatory inversion. It is well established that acoustic-to-articulatory inversion is an underdetermined problem. On the other hand, there is strong evidence that human speakers/listeners exploit the multimodality of speech, in particular articulatory cues: the view of the visible articulators, i.e. the jaw and lips, improves speech intelligibility. It is thus interesting to add constraints provided by direct visual observation of the speaker's face. Visual data were obtained by stereo-vision, enabling the 3D recovery of jaw and lip movements. These data were then processed to match the nature of the parameters of Maeda's articulatory model. Inversion experiments were conducted.
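The two ideas in the abstract, a linear articulatory model and visual constraints on the visible articulators, can be sketched numerically. The snippet below is a minimal illustration, not Maeda's actual model: the shape components are random placeholders, and the dimensions (30 contour points, 7 parameters) are assumptions chosen only for the example. It shows why the problem is underdetermined and how observing the visible part of the contour (jaw and lips) constrains the parameters by least squares.

```python
import numpy as np

# Hypothetical linear articulatory model in the spirit of Maeda's:
# a vocal-tract contour is a mean shape plus a weighted sum of
# linear components (jaw, tongue body, lips, ...).
# Components here are random placeholders, NOT Maeda's factors.
rng = np.random.default_rng(0)
n_points, n_params = 30, 7                 # contour points, articulatory parameters
mean_shape = rng.standard_normal(n_points)
components = rng.standard_normal((n_points, n_params))  # one column per parameter

def synthesize(p):
    """Vocal-tract contour generated by articulatory parameters p."""
    return mean_shape + components @ p

# Visual constraint: suppose stereo-vision recovers only the visible part
# of the contour (here, assume the first 5 points correspond to jaw/lips).
visible = slice(0, 5)
p_true = rng.standard_normal(n_params)
observed = synthesize(p_true)[visible]

# Least-squares fit of the parameters to the visible observations:
# 5 equations for 7 unknowns, so the system is underdetermined and
# lstsq returns the minimum-norm solution.
p_hat, *_ = np.linalg.lstsq(
    components[visible], observed - mean_shape[visible], rcond=None
)
```

The fitted parameters reproduce the visible contour exactly but generally differ from the true ones, which is precisely the underdetermination that the acoustic (or audiovisual) data must resolve.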
Document type :
Conference papers

Cited literature [9 references]

https://hal.inria.fr/inria-00112223
Contributor : Blaise Potard
Submitted on : Tuesday, November 7, 2006 - 7:16:56 PM
Last modification on : Thursday, January 11, 2018 - 6:19:56 AM
Long-term archiving on : Tuesday, April 6, 2010 - 9:50:35 PM

Identifiers

  • HAL Id : inria-00112223, version 1


Citation

Yves Laprie, Blaise Potard. Adapting visual data to a linear articulatory model. 7th International Seminar on Speech Production - ISSP 2006, Dec 2006, São Paulo, Brazil. ⟨inria-00112223⟩
