KL Divergence Regularized Learning Model for Multi-Agent Decision Making

Shinkyu Park, Naomi Ehrich Leonard

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Scopus citations

Abstract

This paper investigates a multi-agent decision making model in large population games. We consider a population of agents that select strategies of interaction with one another. Agents repeatedly revise their strategy choices according to the decision-making model. We examine the scenario in which the agents' strategy revision is subject to time delay. This is specified in the problem formulation by requiring the decision-making model to depend on delayed information about the agents' strategy choices. The main goal of this work is to find a multi-agent decision-making model under which the agents' strategy revision converges to equilibrium states, which in our population game formalism coincide with the Nash equilibrium set of underlying games. As key contributions, we propose a new decision-making model called the Kullback-Leibler (KL) divergence regularized learning model, and we establish stability of the Nash equilibrium set under the new model. Using a numerical example and simulations, we illustrate the strong convergence properties of our new model.
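The abstract does not reproduce the model's equations. As an illustrative sketch only, a common closed form for KL-regularized strategy revision maximizes expected payoff minus a KL penalty toward the current population state, which yields an exponentially weighted (softmax-like) update. The payoff function and regularization weight `eta` below are assumptions for illustration, not the paper's exact dynamics:

```python
import numpy as np

def kl_regularized_revision(x, payoff, eta):
    # Closed-form maximizer of <y, payoff> - eta * KL(y || x):
    # y_i is proportional to x_i * exp(payoff_i / eta).
    logits = np.log(x) + payoff / eta
    logits -= logits.max()          # shift for numerical stability
    y = np.exp(logits)
    return y / y.sum()

# Assumed congestion-style game: a strategy's payoff falls with its
# usage, so the unique Nash equilibrium is the uniform distribution.
def payoff(x):
    return -x

x = np.array([0.9, 0.05, 0.05])    # initial strategy distribution
for _ in range(500):
    x = kl_regularized_revision(x, payoff(x), eta=0.5)

print(np.round(x, 3))  # → [0.333 0.333 0.333]
```

In this toy game the iterates converge to the uniform Nash equilibrium; the paper's contribution concerns such convergence under time-delayed information, which this sketch does not model.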

Original language: English (US)
Title of host publication: 2021 American Control Conference, ACC 2021
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 4509-4514
Number of pages: 6
ISBN (Electronic): 9781665441971
DOIs
State: Published - May 25, 2021
Event: 2021 American Control Conference, ACC 2021 - Virtual, New Orleans, United States
Duration: May 25, 2021 - May 28, 2021

Publication series

Name: Proceedings of the American Control Conference
Volume: 2021-May
ISSN (Print): 0743-1619

Conference

Conference: 2021 American Control Conference, ACC 2021
Country/Territory: United States
City: Virtual, New Orleans
Period: 5/25/21 - 5/28/21

All Science Journal Classification (ASJC) codes

  • Electrical and Electronic Engineering

