
MOTHER: A new generation of talking heads providing a flexible articulatory control for video-realistic speech animation

Abstract: This article presents the first version of a talking head, called MOTHER (MOrphable Talking Head for Enhanced Reality), based on an articulatory model describing the degrees of freedom of visible (lips, cheeks...) as well as partially or indirectly visible (jaw, tongue...) speech articulators. Skin details are rendered using texture mapping/blending techniques. We illustrate the flexibility of such articulatory control of video-realistic speaking faces by first demonstrating its ability to track facial movements through optical-to-articulatory inversion, using an analysis-by-synthesis technique. The stability and reliability of the results allow automatic inversion of large video sequences. The inversion results are then used to automatically build a coarticulation model for generating facial movements from text. It improves on the previous Text-To-AudioVisual-Speech (TTAVS) synthesizer developed at ICP in terms of both accuracy and realism.
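For intuition, here is a minimal sketch of how such an analysis-by-synthesis inversion loop can be organized. It is not the authors' implementation: the articulatory model (render_face), the optical feature extractor (extract_features), the six-parameter space, and the least-squares cost are all hypothetical stand-ins, written in Python.

    import numpy as np
    from scipy.optimize import minimize

    N_PARAMS = 6  # illustrative number of articulatory degrees of freedom (jaw, lips, ...)

    # Hypothetical stand-in for the articulatory model + rendering: a fixed linear
    # mapping from articulatory parameters to an optical feature vector.
    _rng = np.random.default_rng(0)
    _BASIS = _rng.standard_normal((8, N_PARAMS))

    def render_face(articulatory_params):
        """Synthesize optical features (e.g. lip-contour points) from articulatory parameters."""
        return _BASIS @ articulatory_params

    def extract_features(video_frame):
        """Stand-in feature extractor: assume the frame is already reduced to a feature vector."""
        return np.asarray(video_frame, dtype=float)

    def invert_frame(video_frame, init_params=None):
        """Find the articulatory parameters whose synthesis best matches one observed frame."""
        observed = extract_features(video_frame)
        x0 = np.zeros(N_PARAMS) if init_params is None else init_params

        def cost(params):
            return float(np.sum((render_face(params) - observed) ** 2))

        return minimize(cost, x0, method="Nelder-Mead").x

    def invert_sequence(frames):
        """Track a whole sequence: each frame is initialized from the previous solution,
        which is what keeps automatic inversion of long video sequences stable."""
        params, trajectory = None, []
        for frame in frames:
            params = invert_frame(frame, init_params=params)
            trajectory.append(params)
        return np.array(trajectory)

    # Toy usage: recover parameters that reproduce features synthesized from known values.
    true_params = np.array([0.2, -0.1, 0.4, 0.0, 0.3, -0.2])
    trajectory = invert_sequence([render_face(true_params)] * 3)

The articulatory trajectories recovered this way over a large corpus of video are the kind of data from which a coarticulation model for text-driven facial animation can then be estimated.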
Document type: Conference papers

Cited literature: 11 references


https://hal.inria.fr/inria-00389362
Contributor: Lionel Reveret
Submitted on: Thursday, May 28, 2009 - 4:00:46 PM
Last modification on: Tuesday, July 27, 2021 - 3:54:02 PM
Long-term archiving on: Thursday, June 10, 2010 - 11:59:27 PM

Identifiers

  • HAL Id: inria-00389362, version 1

Collections

CNRS | ICP | UGA

Citation

Lionel Reveret, Gérard Bailly, Pierre Badin. MOTHER: A new generation of talking heads providing a flexible articulatory control for video-realistic speech animation. International Conference on Spoken Language Processing (ICSLP'2000), Oct 2000, Beijing, China. ⟨inria-00389362⟩

Metrics

Record views: 805
File downloads: 756