
Multi-view inpainting, segmentation and video blending, for more versatile Image Based Rendering

Theo Thonat 1 
1 GRAPHDECO - GRAPHics and DEsign with hEterogeneous COntent
CRISAM - Inria Sophia Antipolis - Méditerranée
Abstract: Creating realistic images with the traditional rendering pipeline requires tedious work: first complex manual work to create 3D models, materials, and lighting, then computationally expensive realistic rendering. Such a process requires both skilled artists and significant computing power. Image Based Rendering (IBR) is an alternative way to create high-quality content using only an unstructured set of photos as input. IBR allows casual users to create and render realistic and immersive scenes in real time, for applications such as virtual tourism, cultural heritage, interactive mapping, urban and architectural planning, and movie production. Existing IBR methods generally produce good image quality, but still suffer from limitations. First, many types of scene content produce visually unappealing rendering artifacts, because the underlying scene representation is insufficient, e.g., for reflective surfaces, thin structures, and dynamic content. Second, scenes are often captured under real-world constraints that require editing to meet user requirements, yet existing IBR methods do not allow this. To address editing, we propose to extend single-image inpainting to allow sparse multi-view object removal. Such inpainting requires hallucinating both color and geometry behind the object to be removed, in a multi-view-coherent fashion. Our method reduces rendering artifacts by removing objects which are not well represented by IBR methods, or by moving well-represented objects in the scene. To address rendering quality, we enlarge the scope of casual IBR in two different ways. First, we deal with the case of thin structures, which are extremely challenging for multi-view 3D reconstruction and represent a major limitation for IBR in an urban context. We propose a pipeline which locates and renders thin structures supported by simple surfaces. We introduce both a multi-view segmentation algorithm for thin structures and a rendering method which extends traditional IBR with transparency information. Second, we propose an approach to extend IBR to dynamic content. By focusing on time-dependent stochastic textures, we preserve both the casual capture setup and the free-viewpoint navigation of the rendered scene. Our key insight is to use a video representation which is adapted to video looping and spatio-temporal blending. Our results for all methods show improved visual quality compared to previous solutions on a variety of input scenes.
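The video-looping idea mentioned in the abstract can be illustrated with a minimal sketch. This is not the thesis' actual method (which blends spatio-temporally across multiple views); it is only the classic single-video building block: hiding the seam of a looping video texture with a temporal crossfade between the last and first frames. The function name and parameters are illustrative, not from the thesis.

```python
import numpy as np

def crossfade_loop(frames, overlap):
    """Make a video texture loop seamlessly by linearly blending the
    last `overlap` frames into the first `overlap` frames.

    frames: float array of shape (T, H, W, C)
    Returns a loop of length T - overlap whose wrap-around is smooth.
    """
    T = frames.shape[0]
    assert 0 < overlap < T, "overlap must be strictly between 0 and T"
    out = frames[:T - overlap].astype(np.float64).copy()
    for i in range(overlap):
        # Weight ramps from near 0 (mostly the tail frame, so the
        # wrap-around from the last output frame is smooth) toward 1
        # (mostly the original start frame).
        a = (i + 1) / (overlap + 1)
        out[i] = (1.0 - a) * frames[T - overlap + i] + a * frames[i]
    return out

if __name__ == "__main__":
    # Tiny synthetic "video": each frame is constant, equal to its index.
    frames = np.zeros((10, 2, 2, 3))
    for t in range(10):
        frames[t] = t
    loop = crossfade_loop(frames, overlap=4)
    print(loop.shape, loop[0, 0, 0, 0])
```

The same weighting idea generalizes to the spatial domain (blending contributions from several input views per pixel), which is the kind of spatio-temporal blending the abstract refers to.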

Cited literature: 143 references
Contributor: Team Reves
Submitted on: Wednesday, December 18, 2019 - 12:00:00 PM
Last modification on: Sunday, August 2, 2020 - 9:40:39 AM
Long-term archiving on: Thursday, March 19, 2020 - 5:29:23 PM


thesis_thonat_small.pdf
Files produced by the author(s)


  • HAL Id: tel-02417599, version 1



Theo Thonat. Multi-view inpainting, segmentation and video blending, for more versatile Image Based Rendering. Graphics [cs.GR]. Université Côte d'Azur, 2019. English. ⟨tel-02417599v1⟩


