From attribute-labels to faces: face generation using a conditional generative adversarial network

Yaohui Wang 1, Antitza Dantcheva 1, Francois Bremond 1
1 STARS - Spatio-Temporal Activity Recognition Systems
CRISAM - Inria Sophia Antipolis - Méditerranée
Abstract: Facial attributes are instrumental in semantically characterizing faces. Automated classification of such attributes (e.g., age, gender, ethnicity) has been a well-studied topic. Here we explore the inverse problem, namely generating attribute-associated faces given attribute-labels. Interest in this topic is fueled by related applications in law enforcement and entertainment. In this work, we propose two models for attribute-label based facial image and video generation, incorporating 2D and 3D deep conditional generative adversarial networks (DCGANs). The attribute-labels serve as a tool to determine the specific representations of the generated images and videos. While these are early results, our findings indicate the methods' ability to generate realistic faces from attribute labels.
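
To illustrate the general idea of label-conditioned generation described in the abstract, the sketch below shows a minimal conditional DCGAN generator in PyTorch that concatenates an attribute vector with the noise input before upsampling to an image. This is not the authors' architecture: the class name, attribute count, layer sizes, output resolution, and attribute encoding are all assumptions made for this example.

```python
# Minimal, hypothetical sketch of a label-conditioned DCGAN generator.
# The conditioning scheme (concatenating an attribute vector with the noise
# vector) and all dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, noise_dim=100, attr_dim=3, feat=64):
        super().__init__()
        in_dim = noise_dim + attr_dim  # noise and attribute labels are concatenated
        self.net = nn.Sequential(
            nn.ConvTranspose2d(in_dim, feat * 8, 4, 1, 0, bias=False),   # 1x1 -> 4x4
            nn.BatchNorm2d(feat * 8),
            nn.ReLU(True),
            nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),  # 4x4 -> 8x8
            nn.BatchNorm2d(feat * 4),
            nn.ReLU(True),
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),  # 8x8 -> 16x16
            nn.BatchNorm2d(feat * 2),
            nn.ReLU(True),
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),      # 16x16 -> 32x32
            nn.BatchNorm2d(feat),
            nn.ReLU(True),
            nn.ConvTranspose2d(feat, 3, 4, 2, 1, bias=False),             # 32x32 -> 64x64
            nn.Tanh(),
        )

    def forward(self, noise, attrs):
        # attrs: (batch, attr_dim) binary attribute labels, e.g. a hypothetical
        # [age-group, gender, ethnicity] encoding; noise: (batch, noise_dim)
        x = torch.cat([noise, attrs], dim=1).unsqueeze(-1).unsqueeze(-1)
        return self.net(x)

# Usage: sample one 64x64 face conditioned on an attribute vector.
z = torch.randn(1, 100)
labels = torch.tensor([[1.0, 0.0, 1.0]])  # hypothetical attribute encoding
img = ConditionalGenerator()(z, labels)   # shape: (1, 3, 64, 64)
```

In this kind of setup the same attribute vector would also be fed to the discriminator, so that the adversarial game penalizes generated faces that do not match the requested labels; a 3D (spatio-temporal) variant would replace the transposed 2D convolutions with 3D ones to produce video clips.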
Document type: Conference papers

https://hal.inria.fr/hal-01894150
Identifiers

  • HAL Id: hal-01894150, version 1

Citation

Yaohui Wang, Antitza Dantcheva, Francois Bremond. From attribute-labels to faces: face generation using a conditional generative adversarial network. ECCVW'18, 5th Women in Computer Vision (WiCV) Workshop in conjunction with the European Conference on Computer Vision, Sep 2018, Munich, Germany. ⟨hal-01894150⟩
