
Synthetic Humans for Action Recognition from Unseen Viewpoints

Gül Varol 1,2, Ivan Laptev 1, Cordelia Schmid 3,4, Andrew Zisserman 2
1 WILLOW - Models of visual object recognition and scene understanding, DI-ENS - Département d'informatique de l'École normale supérieure, Inria de Paris
3 Thoth - Learning models from massive data, Inria Grenoble - Rhône-Alpes, LJK - Laboratoire Jean Kuntzmann
Abstract: Our goal in this work is to improve the performance of human action recognition for viewpoints unseen during training by using synthetic training data. Although synthetic data has been shown to be beneficial for tasks such as human pose estimation, its use for RGB human action recognition is relatively unexplored. We make use of recent advances in monocular 3D human body reconstruction from real action sequences to automatically render synthetic training videos for the action labels. We make the following contributions: (i) we investigate the extent of variations and augmentations that are beneficial to improving performance at new viewpoints. We consider changes in body shape and clothing for individuals, as well as more action-relevant augmentations such as non-uniform frame sampling and interpolating between the motions of individuals performing the same action; (ii) we introduce a new dataset, SURREACT, that allows supervised training of spatio-temporal CNNs for action classification; (iii) we substantially improve the state-of-the-art action recognition performance on the NTU RGB+D and UESTC standard multi-view human action benchmarks; and finally, (iv) we extend the augmentation approach to in-the-wild videos from a subset of the Kinetics dataset to investigate the case when only one-shot training data is available, and demonstrate improvements in this case as well.
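The sketch below illustrates two of the augmentations mentioned in the abstract: non-uniform frame sampling and interpolating between the motions of two individuals performing the same action. It is a minimal NumPy sketch under stated assumptions, not the SURREACT pipeline; the function names nonuniform_frame_sample and interpolate_motions and the binned sampling scheme are hypothetical, and a full implementation would blend body-model pose parameters on the rotation manifold rather than linearly.

    import numpy as np

    def nonuniform_frame_sample(num_frames, clip_len, rng=None):
        """Pick `clip_len` frame indices with random, non-uniform spacing.

        Illustrative sketch only; the paper's actual sampling strategy may differ.
        """
        rng = rng or np.random.default_rng()
        # Draw one random index per equal-sized temporal bin so ordering is
        # preserved while the spacing between sampled frames varies.
        bins = np.linspace(0, num_frames, clip_len + 1)
        return np.array([rng.integers(int(lo), max(int(lo) + 1, int(hi)))
                         for lo, hi in zip(bins[:-1], bins[1:])])

    def interpolate_motions(pose_a, pose_b, alpha=0.5):
        """Blend two temporally aligned pose-parameter sequences (T x D arrays)
        of the same action to synthesise a new motion.

        Hypothetical linear blend; rotation parameters would normally be
        interpolated on the rotation manifold (e.g. via quaternion slerp).
        """
        assert pose_a.shape == pose_b.shape
        return (1.0 - alpha) * pose_a + alpha * pose_b

For example, nonuniform_frame_sample(300, 16) returns 16 ordered but irregularly spaced frame indices from a 300-frame sequence, and interpolate_motions(seq1, seq2, 0.3) yields a motion closer to the first performer.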
Document type: Preprints, Working Papers, ...

https://hal.inria.fr/hal-02435731
Contributor: Gül Varol
Submitted on: Saturday, January 11, 2020 - 12:12:30 PM
Last modification on: Friday, July 3, 2020 - 4:52:52 PM


Identifiers

  • HAL Id: hal-02435731, version 1
  • arXiv: 1912.04070


Citation

Gül Varol, Ivan Laptev, Cordelia Schmid, Andrew Zisserman. Synthetic Humans for Action Recognition from Unseen Viewpoints. 2020. ⟨hal-02435731⟩
