OPTIMAL POLICY EVALUATION USING KERNEL-BASED TEMPORAL DIFFERENCE METHODS

Yaqi Duan, Mengdi Wang, Martin J. Wainwright

Research output: Contribution to journal › Article › peer-review


Abstract

We study nonparametric methods for estimating the value function of an infinite-horizon discounted Markov reward process (MRP). We analyze the kernel-based least-squares temporal difference (LSTD) estimate, which can be understood either as a nonparametric instrumental variables method, or as a projected approximation to the Bellman fixed point equation. Our analysis imposes no assumptions on the transition operator of the Markov chain, but rather only conditions on the reward function and population-level kernel LSTD solutions. Using empirical process theory and concentration inequalities, we establish a nonasymptotic upper bound on the error with explicit dependence on the effective horizon H = (1 − γ)^{-1} of the Markov reward process, the eigenvalues of the associated kernel operator, as well as the instance-dependent variance of the Bellman residual error. In addition, we prove minimax lower bounds over subclasses of MRPs, which show that our guarantees are optimal in terms of the sample size n and the effective horizon H. Whereas existing worst-case theory predicts cubic scaling (H^3) in the effective horizon, our theory reveals a much wider range of scalings, depending on the kernel, the stationary distribution, and the variance of the Bellman residual error. Notably, it is only parametric and near-parametric problems that can ever achieve the worst-case cubic scaling.
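As a concrete illustration of the estimator analyzed in the abstract, the following is a minimal Python sketch of regularized kernel LSTD, assuming a Gaussian kernel and a ridge-style regularizer; the function names, the placement of the regularization term, and the toy data are illustrative assumptions, not details taken from the paper.

import numpy as np

def rbf_kernel(X, Y, bandwidth=1.0):
    # Gaussian kernel matrix with entries exp(-||x_i - y_j||^2 / (2 h^2)).
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def kernel_lstd(S, R, S_next, gamma=0.9, reg=1e-3, bandwidth=1.0):
    # Regularized kernel LSTD from n observed transitions (s_i, r_i, s_i').
    # Solves an empirical projected Bellman equation in the RKHS,
    #     (K - gamma * K_next + n * reg * I) alpha = R,
    # where K[i, j] = k(s_i, s_j) and K_next[i, j] = k(s_i', s_j); this
    # regularization choice is an assumption, not taken from the paper.
    n = len(S)
    K = rbf_kernel(S, S, bandwidth)
    K_next = rbf_kernel(S_next, S, bandwidth)
    alpha = np.linalg.solve(K - gamma * K_next + n * reg * np.eye(n), R)
    # The value estimate is V(s) = sum_j alpha_j k(s, s_j).
    return lambda s: rbf_kernel(np.atleast_2d(s), S, bandwidth) @ alpha

# Toy usage on a 1-D random-walk MRP with reward r(s) = s.
rng = np.random.default_rng(0)
S = rng.uniform(-1.0, 1.0, size=(200, 1))
S_next = np.clip(S + 0.1 * rng.standard_normal((200, 1)), -1.0, 1.0)
R = S.squeeze()
V_hat = kernel_lstd(S, R, S_next, gamma=0.9)
print(V_hat(np.array([0.5])))  # estimated value at state s = 0.5

The single linear solve reflects the projected fixed-point view of LSTD: the Gram matrices K and K_next play the roles of the covariance and cross-covariance operators restricted to the observed sample.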

Original language: English (US)
Pages (from-to): 1927-1952
Number of pages: 26
Journal: Annals of Statistics
Volume: 52
Issue number: 5
DOIs
State: Published - Oct 2024

All Science Journal Classification (ASJC) codes

  • Statistics and Probability
  • Statistics, Probability and Uncertainty

Keywords

  • dynamic programming
  • Markov reward process
  • nonparametric estimation
  • policy evaluation
  • reinforcement learning
  • reproducing kernel Hilbert space
  • sequential decision-making
  • temporal difference learning
