On Gradient Descent Ascent for Nonconvex-Concave Minimax Problems

  • Tianyi Lin
  • Chi Jin
  • Michael I. Jordan

Research output: Contribution to journal › Conference article › peer-review

34 Scopus citations

Abstract

We consider nonconvex-concave minimax problems, min_x max_{y∈Y} f(x, y), where f is nonconvex in x but concave in y, and Y is a convex and bounded set. One of the most popular algorithms for solving this problem is the celebrated gradient descent ascent (GDA) algorithm, which has been widely used in machine learning, control theory and economics. Despite the extensive convergence results for the convex-concave setting, GDA with equal stepsizes can converge to limit cycles or even diverge in the general setting. In this paper, we present complexity results for two-time-scale GDA applied to nonconvex-concave minimax problems, showing that the algorithm can efficiently find a stationary point of the function Φ(·) := max_{y∈Y} f(·, y). To the best of our knowledge, this is the first nonasymptotic analysis of two-time-scale GDA in this setting, shedding light on its superior practical performance in training generative adversarial networks (GANs) and in other real applications.
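The two-time-scale scheme discussed in the abstract can be sketched as follows: the min-player x takes a small gradient descent step while the max-player y takes a larger projected gradient ascent step. This is a minimal illustrative sketch, not the paper's exact algorithm or analysis; the toy objective f(x, y) = sin(x)·y − y²/2 with Y = [−1, 1] (nonconvex in x, concave in y), the stepsizes, and the iteration count are all assumptions chosen for demonstration.

```python
import numpy as np

# Toy nonconvex-concave objective (illustrative assumption):
#   f(x, y) = sin(x) * y - y**2 / 2,  with constraint set Y = [-1, 1].
# Here Phi(x) = max_{y in Y} f(x, y) = 0.5 * sin(x)**2.

def grad_x(x, y):
    # Partial derivative of f with respect to x.
    return np.cos(x) * y

def grad_y(x, y):
    # Partial derivative of f with respect to y.
    return np.sin(x) - y

def two_time_scale_gda(x0=1.0, y0=0.0, eta_x=0.01, eta_y=0.2, iters=5000):
    """Simultaneous GDA with a slow x-stepsize and a fast y-stepsize.

    The stepsize ratio eta_y / eta_x >> 1 is the 'two time scales';
    the ascent step on y is projected back onto Y = [-1, 1].
    """
    x, y = x0, y0
    for _ in range(iters):
        gx, gy = grad_x(x, y), grad_y(x, y)   # gradients at the current pair
        x = x - eta_x * gx                    # slow descent step on x
        y = np.clip(y + eta_y * gy, -1.0, 1.0)  # fast projected ascent on y
    return x, y
```

On this toy problem, y rapidly tracks the best response y*(x) = sin(x), so x effectively runs gradient descent on Φ(x) and settles near a stationary point where Φ'(x) = sin(x)cos(x) ≈ 0, matching the kind of guarantee the abstract describes.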

Original language: English (US)
Journal: Proceedings of Machine Learning Research
Volume: 119
State: Published - 2020
Externally published: Yes
Event: 37th International Conference on Machine Learning, ICML 2020 - Virtual, Online
Duration: Jul 13 2020 - Jul 18 2020

All Science Journal Classification (ASJC) codes

  • Software
  • Control and Systems Engineering
  • Statistics and Probability
  • Artificial Intelligence
