Learning Good State and Action Representations via Tensor Decomposition

Chengzhuo Ni, Anru R. Zhang, Yaqi Duan, Mengdi Wang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

The transition kernel of a continuous-state-action Markov decision process (MDP) admits a natural tensor structure. This paper proposes a tensor-inspired unsupervised learning method to identify meaningful low-dimensional state and action representations from empirical trajectories. The method exploits the MDP's tensor structure via kernelization, importance sampling, and low-Tucker-rank approximation. It can further be used to cluster states and actions, respectively, and to find the best discrete MDP abstraction. We provide sharp statistical error bounds for tensor concentration and for the preservation of diffusion distance after embedding.
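
The sketch below illustrates the general idea behind this kind of pipeline, not the authors' exact procedure (which also involves kernelization and importance sampling): estimate an empirical transition tensor from discretized trajectories, compute a low-Tucker-rank approximation via truncated HOSVD, and cluster states and actions using rows of the resulting factor matrices. All names, ranks, and the discretization step are illustrative assumptions.

    # Minimal sketch: empirical transition tensor -> truncated HOSVD -> clustering.
    import numpy as np
    from sklearn.cluster import KMeans

    def empirical_transition_tensor(trajectory, n_states, n_actions):
        """Count-based estimate of P[s, a, s'] from (s, a, s') triples."""
        P = np.zeros((n_states, n_actions, n_states))
        for s, a, s_next in trajectory:
            P[s, a, s_next] += 1.0
        totals = P.sum(axis=2, keepdims=True)
        return np.divide(P, totals, out=np.zeros_like(P), where=totals > 0)

    def unfold(T, mode):
        """Mode-k unfolding of a 3-way tensor into a matrix."""
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    def hosvd_factors(T, ranks):
        """Leading left singular vectors of each unfolding (truncated HOSVD)."""
        return [np.linalg.svd(unfold(T, k), full_matrices=False)[0][:, :r]
                for k, r in enumerate(ranks)]

    # Hypothetical usage on a discretized trajectory of (state, action, next_state) indices.
    rng = np.random.default_rng(0)
    traj = [(rng.integers(20), rng.integers(5), rng.integers(20)) for _ in range(5000)]
    P_hat = empirical_transition_tensor(traj, n_states=20, n_actions=5)

    U_state, U_action, U_next = hosvd_factors(P_hat, ranks=(4, 3, 4))

    # Rows of the factor matrices serve as low-dimensional embeddings; clustering
    # them yields a discrete abstraction of the state and action spaces.
    state_labels = KMeans(n_clusters=4, n_init=10).fit_predict(U_state)
    action_labels = KMeans(n_clusters=3, n_init=10).fit_predict(U_action)

In this toy version the embeddings come directly from the singular vectors of the tensor unfoldings; the paper's contribution lies in handling continuous state-action spaces and in the statistical error bounds for the resulting embeddings.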

Original language: English (US)
Title of host publication: 2021 IEEE International Symposium on Information Theory, ISIT 2021 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1682-1687
Number of pages: 6
ISBN (Electronic): 9781538682098
DOIs
State: Published - Jul 12 2021
Event: 2021 IEEE International Symposium on Information Theory, ISIT 2021 - Virtual, Melbourne, Australia
Duration: Jul 12 2021 – Jul 20 2021

Publication series

Name: IEEE International Symposium on Information Theory - Proceedings
Volume: 2021-July
ISSN (Print): 2157-8095

Conference

Conference: 2021 IEEE International Symposium on Information Theory, ISIT 2021
Country/Territory: Australia
City: Virtual, Melbourne
Period: 7/12/21 – 7/20/21

All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • Information Systems
  • Modeling and Simulation
  • Applied Mathematics
