
Variational Auto-Encoder: not all failures are equal

Victor Berger 2, Michèle Sebag 1
1 LRI - Laboratoire de Recherche en Informatique, Inria Saclay - Île-de-France
2 TAU - TAckling the Underspecified
Abstract: We claim that a source of severe failures in Variational Auto-Encoders is the choice of the distribution class used for the observation model. A first theoretical and experimental contribution of the paper is to establish that, even in the large-sample limit with arbitrarily powerful neural architectures and latent space, the VAE fails if the sharpness of the distribution class does not match the scale of the data. Our second claim is that the distribution sharpness is preferably learned by the VAE (as opposed to fixed and optimized offline): autonomously adjusting this sharpness allows the VAE to dynamically control the trade-off between optimizing the reconstruction loss and compressing the latent representation. A second, empirical contribution shows how controlling this trade-off is instrumental in escaping poor local optima, akin to a simulated annealing schedule. Both claims are backed by experiments on artificial data, MNIST, and CelebA, showing how sharpness learning addresses the notorious VAE blurriness issue.
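The sharpness argument can be illustrated with the standard Gaussian observation model (an illustrative sketch, not the authors' code; all names here are hypothetical). With residual r = x − decoder(z) and a shared scale σ, the per-pixel negative log-likelihood is 0.5·(r/σ)² + log σ + 0.5·log 2π: the 1/σ² factor weights the reconstruction term of the ELBO against the KL term, and a σ mismatched to the data scale inflates the loss.

```python
import numpy as np

def gaussian_nll(residual, sigma):
    """Per-pixel negative log-likelihood under a Gaussian observation
    model with shared scale sigma: 0.5*(r/sigma)^2 + log(sigma) + const.
    Small sigma makes the reconstruction term dominate the ELBO;
    large sigma lets the KL (compression) term dominate."""
    return 0.5 * (residual / sigma) ** 2 + np.log(sigma) + 0.5 * np.log(2 * np.pi)

rng = np.random.default_rng(0)
# Pretend reconstruction residuals at true scale 0.1
residuals = 0.1 * rng.standard_normal(10_000)

# Closed-form optimum of the NLL in sigma is the RMS residual,
# i.e. the learned sharpness matches the scale of the data:
sigma_opt = np.sqrt(np.mean(residuals ** 2))

nll_opt = gaussian_nll(residuals, sigma_opt).mean()
nll_sharp = gaussian_nll(residuals, 0.01).mean()  # overly sharp model
print(sigma_opt, nll_opt, nll_sharp)
```

Letting the VAE learn σ (e.g. as a free parameter trained by gradient descent alongside the network weights) thus amounts to letting it schedule the reconstruction/compression trade-off during training.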
Document type :
Preprints, Working Papers, ...
Cited literature: 27 references

https://hal.inria.fr/hal-02497248
Contributor: Victor Berger
Submitted on : Tuesday, March 3, 2020 - 3:31:34 PM
Last modification on : Tuesday, April 21, 2020 - 1:09:59 AM
Archived on: Thursday, June 4, 2020 - 4:56:49 PM

Files

main.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : hal-02497248, version 1
  • ARXIV : 2003.01972

Citation

Victor Berger, Michèle Sebag. Variational Auto-Encoder: not all failures are equal. 2020. ⟨hal-02497248⟩

Metrics

  • Record views: 30
  • Files downloads: 233