
From attribute-labels to faces: face generation using a conditional generative adversarial network

Yaohui Wang¹, Antitza Dantcheva¹, François Brémond¹
¹ STARS - Spatio-Temporal Activity Recognition Systems, CRISAM - Inria Sophia Antipolis - Méditerranée
Abstract: Facial attributes are instrumental in semantically characterizing faces. Automated classification of such attributes (e.g., age, gender, ethnicity) has been a well-studied topic. Here we explore the inverse problem: given attribute-labels, generating the associated faces. Interest in this topic is fueled by applications in law enforcement and entertainment. In this work, we propose two models for attribute-label-based facial image and video generation, incorporating 2D and 3D deep conditional generative adversarial networks (DCGANs). The attribute-labels serve to determine the specific representations of the generated images and videos. While these are early results, our findings indicate the methods' ability to generate realistic faces from attribute-labels.
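The conditioning mechanism the abstract describes can be sketched with a toy label-conditioned generator. This is an illustrative assumption of the standard conditional-GAN recipe (concatenating a one-hot attribute vector with the noise input), not the paper's actual DCGAN architecture; all names and dimensions here are hypothetical:

```python
import numpy as np

def one_hot(label, num_classes):
    """Encode an attribute label (e.g. a gender index) as a one-hot vector."""
    v = np.zeros(num_classes)
    v[label] = 1.0
    return v

class ToyConditionalGenerator:
    """Minimal label-conditioned generator G(z, y) -> image.

    The attribute label y is concatenated with the noise vector z before
    the first layer, the usual conditioning mechanism in conditional GANs.
    Weights are random here; a real model would be trained adversarially
    against a discriminator that also sees the label.
    """
    def __init__(self, noise_dim=100, num_attrs=2, img_size=8, seed=0):
        rng = np.random.default_rng(seed)
        self.num_attrs = num_attrs
        self.img_size = img_size
        in_dim, hidden = noise_dim + num_attrs, 64
        self.W1 = rng.normal(0, 0.02, (in_dim, hidden))
        self.W2 = rng.normal(0, 0.02, (hidden, img_size * img_size))

    def generate(self, z, label):
        x = np.concatenate([z, one_hot(label, self.num_attrs)])
        h = np.maximum(0, x @ self.W1)   # ReLU hidden layer
        img = np.tanh(h @ self.W2)       # pixel values in [-1, 1]
        return img.reshape(self.img_size, self.img_size)

g = ToyConditionalGenerator()
z = np.random.default_rng(1).normal(size=100)
img = g.generate(z, label=1)  # the label selects the target attribute
```

Changing only `label` while keeping `z` fixed is how such a model steers generation toward a requested attribute.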
Document type: Conference papers

Cited literature: 12 references
Contributor: Antitza Dantcheva
Submitted on: Friday, October 12, 2018 - 10:50:06 AM
Last modification on: Saturday, June 25, 2022 - 11:32:02 PM
Long-term archiving on: Sunday, January 13, 2019 - 1:07:19 PM


Files produced by the author(s)


  • HAL Id : hal-01894150, version 1



Yaohui Wang, Antitza Dantcheva, François Brémond. From attribute-labels to faces: face generation using a conditional generative adversarial network. ECCVW'18, 5th Women in Computer Vision (WiCV) Workshop in conjunction with the European Conference on Computer Vision, Sep 2018, Munich, Germany. ⟨hal-01894150⟩


