Efficient regret minimization in non-convex games

Elad Hazan, Karan Singh, Cyril Zhang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

4 Scopus citations

Abstract

We consider regret minimization in repeated games with non-convex loss functions. Minimizing the standard notion of regret is computationally intractable. Thus, we define a natural notion of regret which permits efficient optimization and generalizes offline guarantees for convergence to an approximate local optimum. We give gradient-based methods that achieve optimal regret, which in turn guarantee convergence to equilibrium in this framework.
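The abstract describes gradient-based methods that minimize a local, window-smoothed notion of regret. As an illustration only (the exact algorithm and parameters are not given in this record), the idea of descending on a time-smoothed loss can be sketched as follows; the function name `time_smoothed_ogd`, the window size `w`, and the step size `eta` are all assumptions for this sketch, not the paper's notation.

```python
import numpy as np

def time_smoothed_ogd(grad_oracles, x0, w=5, eta=0.05, inner_steps=100):
    """Hedged sketch: online gradient descent on a sliding-window average loss.

    grad_oracles: one gradient oracle per round, grad_oracles[t](x) -> ndarray.
    At round t, take gradient steps on the average of the last w losses,
    driving the windowed gradient norm (a local-regret proxy) toward zero.
    """
    x = np.asarray(x0, dtype=float)
    iterates = []
    for t in range(len(grad_oracles)):
        window = grad_oracles[max(0, t - w + 1): t + 1]
        for _ in range(inner_steps):
            # Gradient of the time-smoothed loss over the current window.
            g = np.mean([grad(x) for grad in window], axis=0)
            x = x - eta * g
        iterates.append(x.copy())
    return iterates
```

On slowly drifting quadratic losses, for example, the windowed gradient at each played point becomes small, which is the kind of approximate local optimality the abstract's regret notion captures.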

Original language: English (US)
Title of host publication: 34th International Conference on Machine Learning, ICML 2017
Publisher: International Machine Learning Society (IMLS)
Pages: 2278-2288
Number of pages: 11
ISBN (Electronic): 9781510855144
State: Published - Jan 1 2017
Event: 34th International Conference on Machine Learning, ICML 2017 - Sydney, Australia
Duration: Aug 6 2017 - Aug 11 2017

Publication series

Name: 34th International Conference on Machine Learning, ICML 2017
Volume: 3

Other

Other: 34th International Conference on Machine Learning, ICML 2017
Country: Australia
City: Sydney
Period: 8/6/17 - 8/11/17

All Science Journal Classification (ASJC) codes

  • Computational Theory and Mathematics
  • Human-Computer Interaction
  • Software

Cite this

Hazan, E., Singh, K., & Zhang, C. (2017). Efficient regret minimization in non-convex games. In 34th International Conference on Machine Learning, ICML 2017 (pp. 2278-2288). (34th International Conference on Machine Learning, ICML 2017; Vol. 3). International Machine Learning Society (IMLS).