TY - GEN
T1 - Sample-optimal parametric Q-learning using linearly additive features
AU - Yang, Lin F.
AU - Wang, Mengdi
PY - 2019/1/1
Y1 - 2019/1/1
AB - Consider a Markov decision process (MDP) that admits a set of state-action features which can linearly express the process's probabilistic transition model. We propose a parametric Q-learning algorithm that finds an approximately optimal policy using a sample size proportional to the feature dimension K and invariant with respect to the size of the state space. To further improve its sample efficiency, we exploit the monotonicity property and intrinsic noise structure of the Bellman operator, provided the existence of anchor state-actions that imply implicit non-negativity in the feature space. We augment the algorithm with techniques of variance reduction, monotonicity preservation, and confidence bounds. It is proved to find a policy that is ε-optimal from any initial state with high probability using Õ(K/(ε²(1−γ)³)) sample transitions for arbitrarily large-scale MDPs with discount factor γ ∈ (0, 1). A matching information-theoretic lower bound is proved, confirming the sample optimality of the proposed method with respect to all parameters (up to poly-log factors).
UR - http://www.scopus.com/inward/record.url?scp=85078226346&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85078226346&partnerID=8YFLogxK
M3 - Conference contribution
T3 - 36th International Conference on Machine Learning, ICML 2019
SP - 12095
EP - 12114
BT - 36th International Conference on Machine Learning, ICML 2019
PB - International Machine Learning Society (IMLS)
T2 - 36th International Conference on Machine Learning, ICML 2019
Y2 - 9 June 2019 through 15 June 2019
ER -