
Layer-wise learning of deep generative models

Ludovic Arnold 1, 2, Yann Ollivier 1, 2
2 TAO - Machine Learning and Optimisation
CNRS - Centre National de la Recherche Scientifique (UMR8623), Inria Saclay - Île-de-France, UP11 - Université Paris-Sud - Paris 11, LRI - Laboratoire de Recherche en Informatique
Abstract: When using deep, multi-layered architectures to build generative models of data, it is difficult to train all layers at once. We propose a layer-wise training procedure with a performance guarantee relative to the global optimum. It is based on an optimistic proxy of future performance, the best latent marginal. We interpret auto-encoders in this setting as generative models by showing that they train a lower bound of this criterion. We test the new learning procedure against a state-of-the-art method (stacked RBMs) and find that it improves performance. Both theory and experiments highlight the importance, when training deep architectures, of using an inference model (from data to hidden variables) richer than the generative model (from hidden variables to data).
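The "best latent marginal" proxy can be made concrete on a toy discrete model. The sketch below is a minimal illustration, not the paper's implementation: assuming a two-layer model with a finite hidden variable h and a fixed, already-trained lower layer P(x|h), it searches over all distributions q on h for the one maximizing the average data log-likelihood (1/n) Σ_i log Σ_h q(h) P(x_i|h), using EM-style mixture-weight updates (this maximization over q is concave). The function name and the random setup are illustrative assumptions.

```python
# Hypothetical sketch of the "best latent marginal" (BLM) idea on a toy model:
# given a fixed lower layer P(x|h) with a finite hidden variable h, find the
# distribution q over h maximizing (1/n) * sum_i log sum_h q(h) P(x_i | h).
# Names and the random setup are illustrative, not the paper's code.
import numpy as np

def best_latent_marginal(lik, n_iter=200, tol=1e-10):
    """lik[i, k] = P(x_i | h = k): likelihood of sample i under hidden state k.
    Returns the optimistic proxy value and the maximizing distribution q."""
    n, K = lik.shape
    q = np.full(K, 1.0 / K)           # start from the uniform distribution on h
    prev = -np.inf
    for _ in range(n_iter):
        mix = lik @ q                  # mix[i] = sum_h q(h) P(x_i | h)
        resp = lik * q / mix[:, None]  # resp[i, k] = posterior P(h = k | x_i)
        q = resp.mean(axis=0)          # EM update: average posterior responsibilities
        ll = np.log(mix).mean()        # monotonically non-decreasing under EM
        if ll - prev < tol:
            break
        prev = ll
    return np.log(lik @ q).mean(), q

# Toy check with a random stand-in for the fixed lower layer P(x_i | h_k).
rng = np.random.default_rng(0)
lik = rng.random((5, 3))
blm_value, q_star = best_latent_marginal(lik)
uniform_value = np.log(lik.mean(axis=1)).mean()  # likelihood under uniform q
print(blm_value, ">=", uniform_value)
```

Because it optimizes over every possible hidden-variable distribution rather than any particular upper-layer model, the resulting value upper-bounds what any fixed upper layer could achieve, which is what makes it an optimistic proxy of future performance in the sense of the abstract.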
https://hal.archives-ouvertes.fr/hal-00794302
Contributor: Yann Ollivier
Submitted on: Monday, February 25, 2013 - 2:44:05 PM
Last modified: Tuesday, April 21, 2020 - 1:07:30 AM

Identifiers

  • HAL Id: hal-00794302, version 1
  • arXiv: 1212.1524

Citation

Ludovic Arnold, Yann Ollivier. Layer-wise learning of deep generative models. 2013. ⟨hal-00794302⟩
