Princeton University
Model-Based Reinforcement Learning for Offline Zero-Sum Markov Games
Yuling Yan, Gen Li, Yuxin Chen, Jianqing Fan
Affiliations: Operations Research & Financial Engineering; Bendheim Center for Finance; Center for Statistics & Machine Learning; Economics; Princeton Language and Intelligence (PLI)
Research output: Contribution to journal › Article › peer-review
4 Scopus citations
Fingerprint
Dive into the research topics of 'Model-Based Reinforcement Learning for Offline Zero-Sum Markov Games'. Together they form a unique fingerprint.
Keyphrases: Sample Complexity (100%), Model-based Reinforcement Learning (100%), Zero-sum Markov Game (100%), Nash Equilibrium (50%), Markov (50%), Complexity Bounds (50%), Approximate Nash Equilibrium (50%), Targeting Accuracy (50%), Zero-sum (50%), Offline Data (50%), Variance Reduction (50%), Value Iteration (50%), Bernstein (50%), Infinite Horizon (50%), Minimax Optimality (50%), S-states (50%), Distribution Shift (50%), Sample Splitting (50%), Bounds for Zeros (50%), Lower Confidence Bound (50%), Markov Games (50%)

Mathematics: Optimality (100%), Nash Equilibrium (100%), Approximates (50%), Minimax (50%), Variance Reduction (50%), Lower Confidence Bound (50%)

Computer Science: Nash Equilibrium (100%), Model-Based Reinforcement Learning (100%), Confidence Bound (50%), Variance Reduction (50%)
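The fingerprint terms above (zero-sum, Nash equilibrium, minimax optimality) come from game theory. As a minimal illustration of what the equilibrium value of a zero-sum game means — using a made-up 2×2 payoff matrix, not the paper's Markov-game setting or its algorithm:

```python
# Illustrative sketch only: the value of a tiny two-player zero-sum
# matrix game with a pure-strategy saddle point. The payoff matrix A
# is a hypothetical example, unrelated to the paper.
A = [
    [3, 2],  # row player's payoffs for her action 0
    [1, 0],  # row player's payoffs for her action 1
]

# Row player maximizes her guaranteed payoff (maximin);
# column player limits the best she can achieve (minimax).
maximin = max(min(row) for row in A)
minimax = min(max(A[i][j] for i in range(len(A))) for j in range(len(A[0])))

# When maximin == minimax, a pure Nash equilibrium exists and that
# common number is the value of the game.
assert maximin == minimax
print("game value:", maximin)  # game value: 2
```

In general zero-sum matrix games the two quantities coincide only over mixed strategies (von Neumann's minimax theorem); the paper's setting layers Markovian state transitions on top of this, which is what makes offline learning of an approximate Nash equilibrium nontrivial.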