Learning Disentangled Representations with Reference-Based Variational Autoencoders

Abstract: Learning disentangled representations from visual data, where high-level generative factors correspond to independent dimensions of feature vectors, is important for many computer vision tasks. Supervised approaches, however, require significant annotation effort to label the factors of interest in a training set. To alleviate this annotation cost, we introduce a learning setting which we refer to as "reference-based disentangling". Given a pool of unlabelled images, the goal is to learn a representation in which a set of target factors is disentangled from the others. The only supervision comes from an auxiliary "reference set" containing images in which the factors of interest are constant. To address this problem, we propose reference-based variational autoencoders, a novel deep generative model designed to exploit the weak supervisory signal provided by the reference set. During training, we use the variational inference framework, in which adversarial learning is used to minimize the objective function. By addressing tasks such as feature learning, conditional image generation, and attribute transfer, we validate the ability of the proposed model to learn disentangled representations from minimal supervision.
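The core idea of the setting can be sketched in a few lines. The following is a hypothetical toy illustration, not the paper's method: linear maps stand in for the deep encoder/decoder, plain reconstruction errors stand in for the variational and adversarial terms, and the names (`rbvae_sketch_loss`, `e0`) are invented for this sketch. What it shows is the structural point of reference-based disentangling: the latent code is split into target factors `e` and remaining factors `c`, and reference images, whose target factors are known to be constant, are reconstructed with a fixed value `e0` in place of their inferred `e`, which ties the `e` part of the representation to the factors of interest.

```python
import numpy as np

rng = np.random.default_rng(0)
d, de, dc = 8, 2, 3          # input dim, target-factor dim, common-factor dim

# Toy linear encoder/decoder standing in for the deep networks of the paper.
We, Wc = rng.normal(size=(d, de)), rng.normal(size=(d, dc))
Wd = rng.normal(size=(de + dc, d))

def encode(x):
    # Split latent representation: target factors e, remaining factors c.
    return x @ We, x @ Wc

def decode(e, c):
    return np.concatenate([e, c], axis=1) @ Wd

def rbvae_sketch_loss(x_pool, x_ref, e0):
    # Unlabelled pool: reconstruct from the full latent (e, c).
    e, c = encode(x_pool)
    pool_term = np.mean((decode(e, c) - x_pool) ** 2)
    # Reference set: the target factors are constant by construction, so
    # reference images must be reconstructed from the fixed value e0 plus
    # their inferred common factors c. This constraint is what associates
    # the e part of the code with the factors of interest.
    _, c_ref = encode(x_ref)
    e_fixed = np.tile(e0, (len(x_ref), 1))
    ref_term = np.mean((decode(e_fixed, c_ref) - x_ref) ** 2)
    return pool_term + ref_term

x_pool = rng.normal(size=(32, d))
x_ref = rng.normal(size=(16, d))
loss = rbvae_sketch_loss(x_pool, x_ref, e0=np.zeros((1, de)))
print(loss)
```

In the actual model, these reconstruction terms are replaced by a variational objective optimized with adversarial learning, which this sketch omits entirely.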
Document type:
Preprint, Working paper
2018

https://hal.inria.fr/hal-01896007
Contributor: Adrià Ruiz
Submitted on: Monday, 15 October 2018 - 16:46:06
Last modified on: Saturday, 20 October 2018 - 01:08:47

File

RbVAEs_RUIZ2018_.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : hal-01896007, version 1

Citation

Adrià Ruiz, Oriol Martinez, Xavier Binefa, Jakob Verbeek. Learning Disentangled Representations with Reference-Based Variational Autoencoders. 2018. ⟨hal-01896007⟩
