Generalization and equilibrium in generative adversarial nets (GANs)

Sanjeev Arora, Rong Ge, Yingyu Liang, Tengyu Ma, Yi Zhang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

230 Scopus citations

Abstract

Generalization is defined for the training of generative adversarial networks (GANs), and it is shown that generalization is not guaranteed for popular distances between distributions such as Jensen-Shannon or Wasserstein. In particular, training may appear to be successful, yet the trained distribution may be arbitrarily far from the target distribution in standard metrics. It is shown that generalization does occur for a much weaker metric called neural net distance. It is also shown that an approximate pure equilibrium exists in the discriminator/generator game for a natural training objective (Wasserstein) when generator capacity and training set sizes are moderate. Finally, the above theoretical ideas suggest a new training protocol, mix+GAN, which can be combined with any existing method and is empirically found to improve some existing GAN protocols out of the box.
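
As a rough sketch of the neural net distance mentioned in the abstract: it restricts the supremum in an integral-probability-metric-style distance to the class of functions computable by the discriminator networks. The notation below (the discriminator class F and its symbol) is assumed for illustration and not taken from this record.

```latex
% Sketch of the neural net distance between distributions \mu and \nu,
% where \mathcal{F} denotes the class of functions computable by the
% discriminator networks (notation assumed).
d_{\mathcal{F}}(\mu, \nu) \;=\;
  \sup_{D \in \mathcal{F}}
  \Bigl|\, \mathbb{E}_{x \sim \mu}[D(x)] \;-\; \mathbb{E}_{x \sim \nu}[D(x)] \,\Bigr|
```

Because F is much smaller than the class of all bounded functions, this distance is weaker than Jensen-Shannon or Wasserstein, which is what allows generalization bounds with moderate sample sizes.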

Original language: English (US)
Title of host publication: 34th International Conference on Machine Learning, ICML 2017
Publisher: International Machine Learning Society (IMLS)
Pages: 322-349
Number of pages: 28
ISBN (Electronic): 9781510855144
State: Published - 2017
Event: 34th International Conference on Machine Learning, ICML 2017 - Sydney, Australia
Duration: Aug 6 2017 - Aug 11 2017

Publication series

Name: 34th International Conference on Machine Learning, ICML 2017
Volume: 1

Other

Other: 34th International Conference on Machine Learning, ICML 2017
Country/Territory: Australia
City: Sydney
Period: 8/6/17 - 8/11/17

All Science Journal Classification (ASJC) codes

  • Computational Theory and Mathematics
  • Human-Computer Interaction
  • Software
