Learning With Delayed Payoffs in Population Games Using Kullback–Leibler Divergence Regularization

Research output: Contribution to journal › Article › peer-review


Abstract

We study a multiagent decision problem in large population games. Agents from multiple populations select strategies for repeated interactions with one another. At each stage of these interactions, the agents use their decision-making model to revise their strategy selections based on payoffs determined by an underlying game. Their goal is to learn the strategies that correspond to the Nash equilibrium of the game. However, when games are subject to time delays, conventional decision-making models from the population game literature may cause oscillations in the strategy revision process or convergence to an equilibrium other than the Nash equilibrium. To address this problem, we propose the Kullback–Leibler Divergence Regularized Learning (KLD-RL) model, along with an algorithm that iteratively updates the model's regularization parameter across a network of communicating agents. Using passivity-based convergence analysis techniques, we show that the KLD-RL model converges to the Nash equilibrium without oscillations, even for a class of population games subject to time delays. We demonstrate our main results numerically on a two-population congestion game and a two-population zero-sum game.
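The abstract does not reproduce the KLD-RL update rule, but the mechanism it describes — trading off stage payoffs against a Kullback–Leibler divergence penalty toward a reference strategy, with a tunable regularization parameter — admits a standard closed form: maximizing ⟨p, y⟩ − η·KL(y‖μ) over the simplex yields y_i ∝ μ_i exp(p_i/η). The sketch below illustrates that generic form only; the function name kld_rl_choice, the reference strategy mu, and the parameter values are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def kld_rl_choice(payoffs, mu, eta):
    """KL-regularized choice (illustrative sketch, not the paper's exact rule).

    Maximizes <payoffs, y> - eta * KL(y || mu) over the probability simplex.
    Closed form: y_i proportional to mu_i * exp(payoffs_i / eta).
    """
    logits = np.log(mu) + payoffs / eta
    logits -= logits.max()            # shift for numerical stability
    w = np.exp(logits)
    return w / w.sum()                # normalize to a probability vector

# Example: three strategies, uniform reference distribution (hypothetical values).
p = np.array([1.0, 0.5, 0.0])         # stage payoffs
mu = np.ones(3) / 3                   # reference strategy
for eta in (0.1, 1.0, 10.0):
    print(eta, kld_rl_choice(p, mu, eta))
```

Larger values of eta keep the revised strategy close to the reference distribution, while smaller values approach a pure best response; this is why tuning the regularization parameter, as the proposed algorithm does iteratively across the agent network, is central to the convergence result.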

Original language: English (US)
Pages (from-to): 6593-6608
Number of pages: 16
Journal: IEEE Transactions on Automatic Control
Volume: 70
Issue number: 10
DOIs
State: Published - 2025

All Science Journal Classification (ASJC) codes

  • Control and Systems Engineering
  • Computer Science Applications
  • Electrical and Electronic Engineering

Keywords

  • Decision making
  • evolutionary dynamics
  • game theory
  • multi-agent systems
  • nonlinear systems
