Abstract : The two key players in Generative Adversarial Networks (GANs), the discriminator and generator, are usually parameterized as deep neural networks (DNNs). On many generative tasks, GANs achieve state-of-the-art performance but are often unstable to train and sometimes miss modes. A typical failure mode is the collapse of the generator to a single parameter configuration where its outputs are identical. When this collapse occurs, the gradient of the discriminator may point in similar directions for many similar points. We hypothesize that some of these shortcomings are in part due to primitive and redundant features extracted by the discriminator, which can easily stall training. We present a novel approach for regularizing adversarial models by enforcing diverse feature learning. To this end, both the generator and the discriminator are regularized by penalizing both negatively and positively correlated features according to their degree of differentiation, as measured by their relative cosine distances. In addition to the gradient information from the adversarial loss made available by the discriminator, diversity regularization also supplies a more stable gradient for updating both the generator and the discriminator. Results indicate that our regularizer enforces diverse features, stabilizes training, and improves image synthesis.
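The cosine-distance penalty described in the abstract can be sketched as follows. This is a minimal NumPy illustration of penalizing highly correlated (positively or negatively) feature vectors, not the paper's exact formulation: the threshold `tau` and the hinge-squared form are assumptions made for the example.

```python
import numpy as np

def diversity_penalty(F, tau=0.5):
    """Illustrative surrogate for a diversity regularizer.

    F   : (n_features, dim) array, one feature vector per row
          (e.g. a filter's weights or its activations).
    tau : cosine-similarity threshold above which a pair of
          features is considered redundant (an assumed value;
          the paper's precise penalty differs).
    """
    # Normalize rows so the Gram matrix holds cosine similarities.
    norms = np.linalg.norm(F, axis=1, keepdims=True)
    Fn = F / np.maximum(norms, 1e-12)
    S = Fn @ Fn.T                       # pairwise cosine similarities
    np.fill_diagonal(S, 0.0)            # ignore self-similarity
    # Penalize both positively and negatively correlated pairs.
    excess = np.maximum(np.abs(S) - tau, 0.0)
    # Halve the sum because the symmetric matrix counts each pair twice.
    return 0.5 * float(np.sum(excess ** 2))
```

In training, such a term would be added (with a weight) to the adversarial losses of both networks, so that gradient updates push redundant features apart while leaving already-diverse features untouched.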
https://hal.inria.fr/hal-02331296
Submitted on: Thursday, October 24, 2019. Last modified on: Thursday, October 24, 2019. Long-term archiving on: Saturday, January 25, 2020.
Restricted access: to satisfy the distribution rights of the publisher, the document is embargoed until 2022-01-01.
Babajide Ayinde, Keishin Nishihama, Jacek Zurada. Diversity Regularized Adversarial Deep Learning. 15th IFIP International Conference on Artificial Intelligence Applications and Innovations (AIAI), May 2019, Hersonissos, Greece. pp.292-306, ⟨10.1007/978-3-030-19823-7_24⟩. ⟨hal-02331296⟩