
Omnidirectional texturing of human actors from multiple view video sequences

Alexandrina Orzan 1 Jean-Marc Hasenfratz 1, 2
1 ARTIS - Acquisition, representation and transformations for image synthesis
GRAVIR - IMAG - Laboratoire d'informatique GRAphique, VIsion et Robotique de Grenoble, Inria Grenoble - Rhône-Alpes, CNRS - Centre National de la Recherche Scientifique : FR71
Abstract : In 3D video, recorded object behaviors can be observed from any viewpoint, because the 3D video registers the object's 3D shape and color. However, the real-world views are limited to those captured by a small number of cameras, so only a coarse model of the object can be recovered in real time. It then becomes necessary to judiciously texture the object with the images recovered from the cameras. One of the problems in multi-texturing is deciding which portion of the 3D model is visible from which camera. We propose a texture-mapping algorithm that bypasses the problem of deciding exactly whether a point is visible from a given camera. Given more than two color values for each pixel, a statistical test makes it possible to exclude outlying color data before blending.
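The abstract does not specify which statistical test the authors use to reject outlying color samples, so the following is only a minimal sketch of the general idea: given several per-pixel color samples (one per camera that might see the surface point), discard samples that deviate strongly from the consensus (here via a median-absolute-deviation test, an assumed stand-in for the paper's test) and average the rest.

```python
import numpy as np

def blend_colors(samples, z_thresh=1.5):
    """Blend per-pixel RGB samples from multiple cameras,
    rejecting outliers before averaging.

    samples: (n, 3) array-like, one RGB row per camera that may
    see the point; n > 2 is assumed, as in the abstract, so that
    an outlier can be identified by majority agreement.
    The MAD-based test below is an illustrative choice, not
    necessarily the test used by the authors.
    """
    samples = np.asarray(samples, dtype=float)
    # Distance of each sample from the component-wise median color.
    median = np.median(samples, axis=0)
    dist = np.linalg.norm(samples - median, axis=1)
    # Robust scale estimate: median absolute deviation of distances.
    mad = np.median(np.abs(dist - np.median(dist)))
    if mad == 0:
        # All cameras agree (up to ties); keep every sample.
        inliers = np.ones(len(samples), dtype=bool)
    else:
        inliers = np.abs(dist - np.median(dist)) / mad <= z_thresh
    return samples[inliers].mean(axis=0)
```

For example, with samples [250, 10, 10], [255, 0, 0], and [10, 10, 10] (two cameras seeing a red surface, one occluded camera contributing a wrong dark sample), the dark sample is rejected and the two red samples are averaged.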
Document type :
Conference papers

Cited literature : 14 references
Contributor : Jean-Marc Hasenfratz
Submitted on : Thursday, May 22, 2008 - 10:34:56 AM
Last modification on : Wednesday, September 8, 2021 - 1:50:28 PM
Long-term archiving on : Friday, May 28, 2010 - 6:16:02 PM


Files produced by the author(s)


  • HAL Id : inria-00281378, version 1




Alexandrina Orzan, Jean-Marc Hasenfratz. Omnidirectional texturing of human actors from multiple view video sequences. Romanian Conference on Computer-Human Interaction, 2005, Cluj-Napoca, Romania. pp.133-136. ⟨inria-00281378⟩


