Mixed batches and symmetric discriminators for GAN training

Thomas Lucas (1), Corentin Tallec (2,3), Jakob Verbeek (1), Yann Ollivier (4)
(1) Thoth - Apprentissage de modèles à partir de données massives, LJK - Laboratoire Jean Kuntzmann, Inria Grenoble - Rhône-Alpes
(3) TAU - TAckling the Underspecified, LRI - Laboratoire de Recherche en Informatique, UP11 - Université Paris-Sud - Paris 11, Inria Saclay - Ile de France, CNRS - Centre National de la Recherche Scientifique : UMR8623
Abstract: Generative adversarial networks (GANs) are powerful generative models in which a generator network is trained through feedback from a discriminator network. However, the discriminator usually assesses individual samples. This prevents the discriminator from accessing global distributional statistics of generated samples, and often leads to mode dropping: the generator models only part of the target distribution. We propose to feed the discriminator with mixed batches of true and fake samples, and to train it to predict the ratio of true samples in the batch. This score does not depend on the order of samples within a batch. Rather than requiring the discriminator to learn this invariance, we introduce a generic permutation-invariant discriminator architecture, which is provably a universal approximator of all symmetric functions. Experimentally, our approach reduces mode collapse in GANs on two synthetic datasets, and obtains good results on the CIFAR10 and CelebA datasets, both qualitatively and quantitatively.
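
To make the two ideas in the abstract concrete, here is a minimal, hypothetical PyTorch-style sketch: a batch is assembled by mixing real and generated samples with a random true-sample ratio, and a permutation-invariant discriminator (per-sample features followed by symmetric mean pooling) regresses that ratio. All names (MixedBatchDiscriminator, make_mixed_batch) and layer sizes are illustrative assumptions, not the authors' implementation; the paper's exact mixing and pooling scheme may differ.

```python
import torch
import torch.nn as nn

class MixedBatchDiscriminator(nn.Module):
    """Permutation-invariant critic: per-sample features are pooled with a
    symmetric operation (mean), so the output does not depend on sample order."""
    def __init__(self, x_dim, h_dim=128):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, h_dim), nn.ReLU())
        self.rho = nn.Sequential(nn.Linear(h_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, 1), nn.Sigmoid())

    def forward(self, batch):                  # batch: (n, x_dim)
        pooled = self.phi(batch).mean(dim=0)   # symmetric pooling over the batch
        return self.rho(pooled)                # predicted ratio of true samples

def make_mixed_batch(real, fake):
    """Mix real and fake samples into one batch; return it with the true-sample ratio."""
    n_real = torch.randint(0, real.size(0) + 1, (1,)).item()
    mixed = torch.cat([real[:n_real], fake[n_real:]], dim=0)
    mixed = mixed[torch.randperm(mixed.size(0))]   # shuffle; output is order-invariant anyway
    ratio = torch.tensor([n_real / real.size(0)])
    return mixed, ratio

# Toy usage: the discriminator is trained to predict the ratio of real samples.
real, fake = torch.randn(16, 32), torch.randn(16, 32)
D = MixedBatchDiscriminator(x_dim=32)
mixed, ratio = make_mixed_batch(real, fake)
loss = nn.functional.binary_cross_entropy(D(mixed), ratio)
loss.backward()
```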
https://hal.inria.fr/hal-01791126
Contributor : Thomas Lucas
Submitted on : Thursday, July 5, 2018 - 2:41:00 PM

Identifiers

  • HAL Id : hal-01791126, version 2

Citation

Thomas Lucas, Corentin Tallec, Jakob Verbeek, Yann Ollivier. Mixed batches and symmetric discriminators for GAN training. ICML - 35th International Conference on Machine Learning, Jul 2018, Stockholm, Sweden. pp.2844-2853. ⟨hal-01791126v2⟩
