Conference paper, Year: 2017

Learning from Synthetic Humans

Abstract

Estimating human pose, shape, and motion from images and videos is a fundamental challenge with many applications. Recent advances in 2D human pose estimation use large amounts of manually-labeled training data for learning convolutional neural networks (CNNs). Such data is time-consuming to acquire and difficult to extend. Moreover, manual labeling of 3D pose, depth, and motion is impractical. In this work we present SURREAL (Synthetic hUmans foR REAL tasks): a new large-scale dataset with synthetically-generated but realistic images of people rendered from 3D sequences of human motion capture data. We generate more than 6 million frames together with ground truth pose, depth maps, and segmentation masks. We show that CNNs trained on our synthetic dataset allow for accurate human depth estimation and human part segmentation in real RGB images. Our results and the new dataset open up new possibilities for advancing person analysis using cheap and large-scale synthetic data.
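As an illustration of the kind of per-pixel supervision that such synthetic ground truth enables, the following is a minimal sketch of training a small fully-convolutional network for human part segmentation on (RGB image, part-label mask) pairs. It is not the network or pipeline used in the paper; the class count, image size, and dummy tensors standing in for SURREAL frames are assumptions made purely for the example.

```python
# Minimal sketch (not the authors' architecture): per-pixel part segmentation
# trained on synthetic (image, mask) pairs. Dummy tensors stand in for real data.
import torch
import torch.nn as nn

NUM_PARTS = 15  # assumption: number of body-part classes incl. background

class TinySegNet(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, num_classes, 1),  # per-pixel class scores
        )

    def forward(self, x):
        return self.net(x)

model = TinySegNet(NUM_PARTS)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Stand-in batch: 4 RGB frames and integer part-label masks of the same size.
images = torch.rand(4, 3, 128, 128)
masks = torch.randint(0, NUM_PARTS, (4, 128, 128))

for step in range(10):
    logits = model(images)           # (N, NUM_PARTS, H, W)
    loss = criterion(logits, masks)  # per-pixel cross-entropy
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In practice the synthetic masks come rendered pixel-aligned with the images, so no manual annotation step is needed; the same setup extends to depth estimation by swapping the classification head and loss for a regression objective.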
Main file: VarolCVPR2017.pdf (4.56 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-01505711 , version 1 (11-04-2017)


Cite

Gül Varol, Javier J Romero, Xavier Martin, Naureen Mahmood, Michael J. Black, et al.. Learning from Synthetic Humans. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), Jul 2017, Honolulu, United States. pp.4627-4635, ⟨10.1109/CVPR.2017.492⟩. ⟨hal-01505711⟩