Near-Optimal Learning of Extensive-Form Games with Imperfect Information

Yu Bai, Chi Jin, Song Mei, Tiancheng Yu

Research output: Contribution to journal › Conference article › peer-review


Abstract

This paper resolves the open question of designing near-optimal algorithms for learning imperfect-information extensive-form games from bandit feedback. We present the first line of algorithms that require only Õ((XA + YB)/ε²) episodes of play to find an ε-approximate Nash equilibrium in two-player zero-sum games, where X, Y are the number of information sets and A, B are the number of actions for the two players. This improves upon the best known sample complexity of Õ((X²A + Y²B)/ε²) by a factor of Õ(max(X, Y)), and matches the information-theoretic lower bound up to logarithmic factors. We achieve this sample complexity by two new algorithms: Balanced Online Mirror Descent, and Balanced Counterfactual Regret Minimization. Both algorithms rely on novel approaches to integrating balanced exploration policies into their classical counterparts. We also extend our results to learning Coarse Correlated Equilibria in multi-player general-sum games.
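
As an illustrative aside (not part of the article's abstract), the size of the claimed improvement can be sanity-checked directly from the two rates quoted above, assuming the Õ notation hides only logarithmic factors. A minimal LaTeX sketch of the calculation:

\[
  \frac{(X^2 A + Y^2 B)/\varepsilon^2}{(X A + Y B)/\varepsilon^2}
  \;=\; \frac{X^2 A + Y^2 B}{X A + Y B}
  \;\le\; \frac{\max(X,Y)\,(X A + Y B)}{X A + Y B}
  \;=\; \max(X, Y),
\]
% the inequality uses X^2 A <= max(X,Y) * XA and Y^2 B <= max(X,Y) * YB,
% so the new rate saves at most a factor of max(X, Y) over the prior one,
% consistent with the abstract's stated improvement of \widetilde{O}(\max(X, Y)).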

Original language: English (US)
Pages (from-to): 1337-1382
Number of pages: 46
Journal: Proceedings of Machine Learning Research
Volume: 162
State: Published - 2022
Event: 39th International Conference on Machine Learning, ICML 2022 - Baltimore, United States
Duration: Jul 17 2022 - Jul 23 2022

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
