G3AN: Disentangling Appearance and Motion for Video Generation - Inria - Institut national de recherche en sciences et technologies du numérique
Conference paper, 2020

G3AN: Disentangling Appearance and Motion for Video Generation

Abstract

Creating realistic human videos entails the challenge of simultaneously generating both appearance and motion. To tackle this challenge, we introduce G3AN, a novel spatio-temporal generative model, which seeks to capture the distribution of high-dimensional video data and to model appearance and motion in a disentangled manner. The latter is achieved by decomposing appearance and motion in a three-stream Generator, where the main stream aims to model spatio-temporal consistency, whereas the two auxiliary streams augment the main stream with multi-scale appearance and motion features, respectively. An extensive quantitative and qualitative analysis shows that our model systematically and significantly outperforms state-of-the-art methods on the facial expression datasets MUG and UvA-NEMO, as well as on the human action datasets Weizmann and UCF101. Additional analysis of the learned latent representations confirms the successful decomposition of appearance and motion. Source code and pre-trained models are publicly available (https://wyhsirius.github.io/G3AN/).
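The three-stream design described above can be illustrated with a minimal sketch: an appearance stream that decodes a static latent code into spatial features, a motion stream that unfolds a motion code over time, and a main spatio-temporal stream that fuses both into a video. This is a toy PyTorch approximation, not the paper's architecture; all module names, channel sizes, and the single-scale fusion below are illustrative assumptions (the actual G3AN fuses the streams at multiple scales via dedicated G3 modules with factorized spatio-temporal convolutions).

```python
# Hedged sketch of a three-stream video generator in the spirit of G3AN.
# Shapes, layer choices, and the fusion point are illustrative assumptions.
import torch
import torch.nn as nn

class ThreeStreamGeneratorSketch(nn.Module):
    """Toy generator with an appearance stream (2D, static), a motion
    stream (1D, over time), and a main spatio-temporal stream (3D)."""
    def __init__(self, za_dim=64, zm_dim=64, ch=32, n_frames=8):
        super().__init__()
        self.n_frames = n_frames
        # Appearance stream: decode the static code into 4x4 spatial features.
        self.app = nn.Sequential(
            nn.ConvTranspose2d(za_dim, ch, 4, 1, 0),  # (B, ch, 4, 4)
            nn.ReLU(inplace=True),
        )
        # Motion stream: unfold the motion code into per-frame features.
        self.mot = nn.Sequential(
            nn.ConvTranspose1d(zm_dim, ch, n_frames, 1, 0),  # (B, ch, T)
            nn.ReLU(inplace=True),
        )
        # Main stream: joint spatio-temporal decoding up to 16x16 RGB frames.
        self.main = nn.Sequential(
            nn.ConvTranspose3d(2 * ch, ch, (1, 4, 4), (1, 2, 2), (0, 1, 1)),
            nn.ReLU(inplace=True),
            nn.ConvTranspose3d(ch, 3, (1, 4, 4), (1, 2, 2), (0, 1, 1)),
            nn.Tanh(),
        )

    def forward(self, za, zm):
        B = za.size(0)
        a = self.app(za.view(B, -1, 1, 1))   # (B, ch, 4, 4)
        m = self.mot(zm.view(B, -1, 1))      # (B, ch, T)
        # Broadcast appearance over time and motion over space, then fuse.
        a = a.unsqueeze(2).expand(-1, -1, self.n_frames, -1, -1)
        m = m[..., None, None].expand(-1, -1, -1, 4, 4)
        return self.main(torch.cat([a, m], dim=1))  # (B, 3, T, 16, 16)

g = ThreeStreamGeneratorSketch()
video = g(torch.randn(2, 64), torch.randn(2, 64))
print(tuple(video.shape))  # (2, 3, 8, 16, 16)
```

The key property the sketch preserves is the separate sampling of an appearance code `za` and a motion code `zm`: fixing one while varying the other is what makes the disentanglement analysis in the paper possible.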
Main file: yaohuiCVPR2020.pdf (3.27 MB). Origin: files produced by the author(s).

Dates and versions

hal-02969849 , version 1 (16-10-2020)

Identifiers

  • HAL Id : hal-02969849 , version 1

Cite

Yaohui Wang, Piotr Bilinski, Francois F Bremond, Antitza Dantcheva. G3AN: Disentangling Appearance and Motion for Video Generation. CVPR 2020 - IEEE Conference on Computer Vision and Pattern Recognition, Jun 2020, Seattle / Virtual, United States. ⟨hal-02969849⟩
