Merging Live and pre-Captured Data to support Full 3D Head Reconstruction for Telepresence

Abstract: This paper proposes a 3D head reconstruction method for low-cost 3D telepresence systems that uses only a single consumer-level hybrid sensor (color + depth) located in front of the user. Our method fuses the real-time, noisy, and incomplete output of a hybrid sensor with a set of static, high-resolution textured models acquired in a calibration phase. A complete and fully textured 3D model of the user's head can thus be reconstructed in real time, accurately preserving the facial expression of the user. The main features of our method are a mesh interpolation and a fusion of static and dynamic textures, combining, respectively, the higher resolution of the pre-captured models and the dynamic features of the live face.
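The paper itself details the fusion scheme; purely as an illustration of the general idea of combining a static high-resolution texture with a live dynamic one, a per-pixel linear blend weighted by a live-capture confidence map might look like the sketch below. The function name, array shapes, and the confidence-weighting scheme are assumptions for this sketch, not the authors' actual method.

```python
import numpy as np

def blend_textures(static_tex, dynamic_tex, confidence):
    """Blend a pre-captured static texture with a live dynamic texture.

    static_tex, dynamic_tex: float arrays of shape (H, W, 3), values in [0, 1].
    confidence: float array of shape (H, W), values in [0, 1]; 1 where the
    live capture is reliable (e.g. sensor-facing regions), 0 where the live
    data is occluded or too noisy and the static texture should dominate.
    """
    # Add a trailing axis so the per-pixel weight broadcasts over RGB.
    w = confidence[..., None]
    return w * dynamic_tex + (1.0 - w) * static_tex
```

With `confidence = 0` everywhere the result is exactly the static texture, and with `confidence = 1` it is exactly the dynamic one; intermediate values give a smooth transition between the two sources.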

https://hal.inria.fr/hal-01060128
Contributor: Cédric Fleury
Submitted on: Tuesday, September 2, 2014

File: short1026_EG2014_CR_opt.pdf (produced by the author(s))

Citation

Cédric Fleury, Tiberiu Popa, Tat Jen Cham, Henry Fuchs. Merging Live and pre-Captured Data to support Full 3D Head Reconstruction for Telepresence. EG'14, Apr 2014, Strasbourg, France. ⟨10.2312/egsh.20141002⟩. ⟨hal-01060128⟩
