Fast convergence of regularized learning in games

Vasilis Syrgkanis, Alekh Agarwal, Haipeng Luo, Robert E. Schapire

Research output: Contribution to journal › Conference article › peer-review

143 Scopus citations

Abstract

We show that natural classes of regularized learning algorithms with a form of recency bias achieve faster convergence rates to approximate efficiency and to coarse correlated equilibria in multiplayer normal-form games. When each player in a game uses an algorithm from our class, their individual regret decays at O(T^{-3/4}), while the sum of utilities converges to an approximate optimum at O(T^{-1}), an improvement upon the worst-case O(T^{-1/2}) rates. We show a black-box reduction for any algorithm in the class to achieve Õ(T^{-1/2}) rates against an adversary, while maintaining the faster rates against algorithms in the class. Our results extend those of Rakhlin and Sridharan [17] and Daskalakis et al. [4], who only analyzed two-player zero-sum games for specific algorithms.
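For readers unfamiliar with the "recency bias" mentioned in the abstract, the canonical member of this algorithm class is Optimistic Hedge, which counts the most recent loss vector twice when forming the next strategy, effectively using it as a prediction of the upcoming loss. The sketch below is not the authors' code; the step size eta and the matching-pennies game are illustrative assumptions.

```python
import numpy as np

def optimistic_hedge_weights(cumulative_loss, last_loss, eta):
    # Recency-biased (optimistic) Hedge: the most recent loss vector is
    # counted twice, acting as a prediction of the next round's loss.
    logits = -eta * (cumulative_loss + last_loss)
    w = np.exp(logits - logits.max())   # subtract max for numerical stability
    return w / w.sum()

# Illustrative usage (assumed setup): both players of a 2x2 zero-sum game
# (matching pennies) run the update against each other.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])             # row player's payoff matrix
eta, T = 0.5, 1000
cum_x = np.zeros(2); last_x = np.zeros(2)
cum_y = np.zeros(2); last_y = np.zeros(2)
for t in range(T):
    x = optimistic_hedge_weights(cum_x, last_x, eta)   # row player's mixed strategy
    y = optimistic_hedge_weights(cum_y, last_y, eta)   # column player's mixed strategy
    loss_x = -(A @ y)                   # row player minimizes negative payoff
    loss_y = A.T @ x                    # column player minimizes payoff to the row player
    cum_x += loss_x; last_x = loss_x
    cum_y += loss_y; last_y = loss_y

print(x, y)                             # both strategies approach (0.5, 0.5)
```

When all players use such an update, the prediction term cancels much of the round-to-round variation in the losses, which is the mechanism behind the faster rates claimed in the abstract.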

Original language: English (US)
Pages (from-to): 2989-2997
Number of pages: 9
Journal: Advances in Neural Information Processing Systems
Volume: 2015-January
State: Published - 2015
Externally published: Yes
Event: 29th Annual Conference on Neural Information Processing Systems, NIPS 2015 - Montreal, Canada
Duration: Dec 7 2015 - Dec 12 2015

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing

