BodyNet: Volumetric Inference of 3D Human Body Shapes - Archive ouverte HAL
Conference Papers Year : 2018

BodyNet: Volumetric Inference of 3D Human Body Shapes


Abstract

Human shape estimation is an important task for video editing, animation, and the fashion industry. Predicting 3D human body shape from natural images, however, is highly challenging due to factors such as variation in human bodies, clothing, and viewpoint. Prior methods addressing this problem typically attempt to fit parametric body models with certain priors on pose and shape. In this work we argue for an alternative representation and propose BodyNet, a neural network for direct inference of volumetric body shape from a single image. BodyNet is an end-to-end trainable network that benefits from (i) a volumetric 3D loss, (ii) a multi-view re-projection loss, and (iii) intermediate supervision of 2D pose, 2D body part segmentation, and 3D pose. Each of these contributes to a performance improvement, as demonstrated by our experiments. To evaluate the method, we fit the SMPL model to our network output and show state-of-the-art results on the SURREAL and Unite the People datasets, outperforming recent approaches. Beyond achieving state-of-the-art performance, our method also enables volumetric body-part segmentation.
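As a rough illustration of how such a combined objective could be assembled, the sketch below (PyTorch, not the authors' code) adds a voxel-wise binary cross-entropy term and a multi-view re-projection term obtained by projecting the predicted occupancy grid to front and side silhouettes. The function names, axis choices, and loss weights are assumptions for illustration only, and the intermediate 2D pose, 2D part segmentation, and 3D pose supervision terms mentioned in the abstract are omitted.

```python
import torch
import torch.nn.functional as F

def project_silhouette(voxels, view_dim):
    # Orthographic-style projection of an occupancy grid (values in [0, 1])
    # to a 2D silhouette by taking the maximum occupancy along one axis.
    return voxels.max(dim=view_dim).values

def combined_loss(pred_voxels, gt_voxels, gt_front_sil, gt_side_sil,
                  w_vox=1.0, w_reproj=0.1):
    # (i) Volumetric 3D loss: binary cross-entropy over the occupancy grid.
    loss_vox = F.binary_cross_entropy(pred_voxels, gt_voxels)

    # (ii) Multi-view re-projection loss: project the predicted grid to
    # front and side views and compare against 2D silhouettes.
    # The axis indices below assume a (batch, depth, height, width) layout.
    front = project_silhouette(pred_voxels, view_dim=1)
    side = project_silhouette(pred_voxels, view_dim=3)
    loss_reproj = (F.binary_cross_entropy(front, gt_front_sil) +
                   F.binary_cross_entropy(side, gt_side_sil))

    # Weighted sum; w_vox and w_reproj are illustrative, not the paper's values.
    return w_vox * loss_vox + w_reproj * loss_reproj
```

In such a setup, the re-projection term only needs 2D silhouettes as supervision, which is one way a volumetric prediction can be constrained from multiple views without additional 3D annotation.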
Main file: VarolECCV2018.pdf (9.06 MB)
Origin: Files produced by the author(s)
Dates and versions

hal-01852169, version 2 (18-08-2018)

Identifiers

HAL Id : hal-01852169
DOI : 10.1007/978-3-030-01234-2_2

Cite

Gül Varol, Duygu Ceylan, Bryan Russell, Jimei Yang, Ersin Yumer, et al.. BodyNet: Volumetric Inference of 3D Human Body Shapes. ECCV 2018 - 15th European Conference on Computer Vision, Sep 2018, Munich, Germany. pp.20-38, ⟨10.1007/978-3-030-01234-2_2⟩. ⟨hal-01852169⟩
507 views
375 downloads
