
Quasi-Symplectic Langevin Variational Autoencoder

Abstract : The variational autoencoder (VAE) is one of the most thoroughly investigated generative models in current neural-learning research. Applying VAEs to practical tasks with high-dimensional data and massive datasets often runs into the problem of constructing a low-variance evidence lower bound. Markov chain Monte Carlo (MCMC) is an effective approach to tightening the evidence lower bound (ELBO) when approximating the posterior distribution. The Hamiltonian Variational Autoencoder (HVAE) is one such MCMC-inspired approach for constructing a low-variance ELBO that is also amenable to the reparameterization trick. While HVAE significantly improves posterior estimation, its main drawback is that the leapfrog integrator must evaluate the posterior gradient twice per step, which degrades inference efficiency and imposes a fairly large GPU memory requirement. This flaw limits the application of Hamiltonian-based inference frameworks to large-scale networks. To tackle this problem, we propose a quasi-symplectic Langevin variational autoencoder (Langevin-VAE), which offers a significant improvement in resource efficiency. We qualitatively and quantitatively demonstrate the effectiveness of the Langevin-VAE compared to state-of-the-art gradient-informed inference frameworks.
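The efficiency argument in the abstract rests on a standard contrast: a leapfrog step (as used in HMC-style inference such as HVAE) evaluates the posterior gradient twice, while a single Langevin (Euler-Maruyama) update needs only one gradient evaluation. The sketch below illustrates that contrast on a toy standard-normal target; the function names and the toy target are illustrative assumptions, not the paper's actual quasi-symplectic integrator or network.

```python
import numpy as np

def grad_log_post(z):
    # Toy target: standard normal posterior, so grad log p(z) = -z.
    # In a VAE this would be the gradient of the log joint w.r.t. the latent z.
    return -z

def leapfrog_step(z, p, eps):
    """One leapfrog step (HMC/HVAE-style): TWO gradient evaluations per step."""
    p = p + 0.5 * eps * grad_log_post(z)   # first gradient call (half kick)
    z = z + eps * p                        # full drift
    p = p + 0.5 * eps * grad_log_post(z)   # second gradient call (half kick)
    return z, p

def langevin_step(z, eps, rng):
    """One overdamped Langevin (Euler-Maruyama) step: ONE gradient evaluation."""
    noise = rng.standard_normal(z.shape)
    return z + 0.5 * eps**2 * grad_log_post(z) + eps * noise

# Usage: a short Langevin chain whose samples approach N(0, 1).
rng = np.random.default_rng(0)
z = np.zeros(1)
samples = []
for _ in range(20000):
    z = langevin_step(z, 0.3, rng)
    samples.append(z[0])
```

Halving the number of gradient calls per step is what drives the memory and runtime savings claimed for Langevin-style inference: in a deep network, each gradient evaluation is a full backward pass.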
Document type :
Preprints, Working Papers, ...

https://hal.inria.fr/hal-03024748
Contributor : Zihao Wang
Submitted on : Thursday, November 26, 2020 - 3:48:15 AM
Last modification on : Monday, January 4, 2021 - 1:47:54 PM

File

ICLR_Langevin_VAE.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : hal-03024748, version 1

Collections

Citation

Zihao Wang, Hervé Delingette. Quasi-Symplectic Langevin Variational Autoencoder. 2020. ⟨hal-03024748⟩
