Conference paper, 2019

Learning Disentangled Representations with Reference-Based Variational Autoencoders

Abstract

Learning disentangled representations from visual data, where different high-level generative factors are independently encoded, is important for many computer vision tasks. Solving this problem, however, typically requires explicitly labelling all the factors of interest in the training images. To alleviate the annotation cost, we introduce a learning setting which we refer to as reference-based disentangling. Given a pool of unlabelled images, the goal is to learn a representation where a set of target factors are disentangled from others. The only supervision comes from an auxiliary reference set containing images where the factors of interest are constant. To address this problem, we propose reference-based variational autoencoders, a novel deep generative model designed to exploit the weak supervision provided by the reference set. By addressing tasks such as feature learning, conditional image generation, and attribute transfer, we validate the ability of the proposed model to learn disentangled representations from this minimal form of supervision.
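To make the setting concrete, the core idea can be sketched with a toy model: split the latent code into target factors e and common factors z, and for images from the reference set (where the target factors are constant) clamp e to the prior so that only z must explain the image. The sketch below is a hypothetical, minimal numpy illustration of this objective with linear encoder/decoder maps (`W`, `V` and all function names are illustrative assumptions, not the authors' implementation).

```python
import numpy as np

# Toy sketch of reference-based disentangling (hypothetical, not the
# paper's model). Latent code = [e, z]: e = target factors, z = common
# factors. Reference images have constant target factors, so e is
# clamped to the prior mean for them.

rng = np.random.default_rng(0)

def gaussian_kl(mu, logvar):
    """KL( N(mu, exp(logvar)) || N(0, I) ), summed over latent dims."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

def encode(x, W):
    """Toy linear encoder returning means for [e, z] (unit variances)."""
    h = x @ W
    d = h.shape[-1] // 2
    return h[..., :d], h[..., d:]          # mu_e, mu_z

def decode(e, z, V):
    """Toy linear decoder mapping [e, z] back to image space."""
    return np.concatenate([e, z], axis=-1) @ V

def rb_vae_loss(x, W, V, is_reference):
    """Reconstruction + KL loss; reference images skip the e-branch."""
    mu_e, mu_z = encode(x, W)
    if is_reference:
        mu_e = np.zeros_like(mu_e)         # target factors fixed on reference set
    recon = decode(mu_e, mu_z, V)
    rec_err = np.sum((x - recon) ** 2, axis=-1)
    kl = gaussian_kl(mu_z, np.zeros_like(mu_z))
    if not is_reference:
        kl = kl + gaussian_kl(mu_e, np.zeros_like(mu_e))
    return np.mean(rec_err + kl)

# Tiny synthetic "images": 8 samples of dimension 6, latent dimension 4.
x_unlab = rng.normal(size=(8, 6))
x_ref = rng.normal(size=(8, 6))
W = rng.normal(size=(6, 4)) * 0.1
V = rng.normal(size=(4, 6)) * 0.1
loss_u = rb_vae_loss(x_unlab, W, V, is_reference=False)
loss_r = rb_vae_loss(x_ref, W, V, is_reference=True)
```

In a real model the linear maps would be deep networks trained by gradient descent on the sum of both losses; the sketch only shows how the reference set supplies supervision without any per-factor labels.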
Main file: RbVAES_HAL.pdf (11.34 MB)
Origin: files produced by the author(s)

Dates and versions

hal-01896007 , version 1 (15-10-2018)
hal-01896007 , version 2 (23-01-2019)

Identifiers

  • HAL Id: hal-01896007, version 2

Cite

Adrià Ruiz, Oriol Martinez, Xavier Binefa, Jakob Verbeek. Learning Disentangled Representations with Reference-Based Variational Autoencoders. ICLR workshop on Learning from Limited Labeled Data, May 2019, New Orleans, United States. pp.1-17. ⟨hal-01896007v2⟩
468 views
760 downloads
