TY - JOUR
T1 - Policy Mirror Descent for Regularized Reinforcement Learning: A Generalized Framework with Linear Convergence
AU - Zhan, Wenhao
AU - Cen, Shicong
AU - Huang, Baihe
AU - Chen, Yuxin
AU - Lee, Jason D.
AU - Chi, Yuejie
N1 - Publisher Copyright:
Copyright © by SIAM.
PY - 2023
Y1 - 2023
AB - Policy optimization, which learns the policy of interest by maximizing the value function via large-scale optimization techniques, lies at the heart of modern reinforcement learning (RL). Beyond value maximization, other practical considerations commonly arise, including the need to encourage exploration and to ensure certain structural properties of the learned policy due to safety, resource, and operational constraints. These considerations can often be accounted for by resorting to regularized RL, which augments the target value function with a structure-promoting regularization term. Focusing on an infinite-horizon discounted tabular Markov decision process, this paper proposes a generalized policy mirror descent (GPMD) algorithm for solving regularized RL. As a generalization of policy mirror descent [G. Lan, Math. Program., 198 (2023), pp. 1059-1106], the proposed algorithm accommodates a general class of convex regularizers as well as a broad family of Bregman divergences chosen in cognizance of the regularizer in use. We demonstrate that our algorithm converges linearly to the global solution over the entire range of learning rates, in a dimension-free fashion, even when the regularizer lacks strong convexity and smoothness. In addition, this linear convergence is provably stable in the face of inexact policy evaluation and imperfect policy updates. Numerical experiments are provided to corroborate the applicability and appealing performance of GPMD.
KW - Bregman divergence
KW - policy mirror descent
KW - policy optimization
KW - regularization
UR - http://www.scopus.com/inward/record.url?scp=85147668657&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85147668657&partnerID=8YFLogxK
U2 - 10.1137/21M1456789
DO - 10.1137/21M1456789
M3 - Article
AN - SCOPUS:85147668657
SN - 1052-6234
VL - 33
SP - 1061
EP - 1091
JO - SIAM Journal on Optimization
JF - SIAM Journal on Optimization
IS - 2
ER -
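
As a hedged aside for readers of this record: the regularized RL objective and the mirror-descent-style update described in the abstract can be sketched in LaTeX. The notation below (per-state regularizer $h_s$, regularization strength $\tau$, learning rate $\eta$, Bregman divergence $D_\phi$) is assumed for illustration; this is a generic Bregman-proximal template consistent with the abstract, not the paper's exact formulation.

% Regularized value function: the discounted return augmented with a
% per-state convex regularizer h_s, weighted by tau (assumed notation).
\[
V_\tau^{\pi}(\rho)
  = \mathbb{E}_{s_0 \sim \rho,\; a_t \sim \pi(\cdot \mid s_t)}
    \left[ \sum_{t=0}^{\infty} \gamma^{t}
      \Big( r(s_t, a_t) - \tau\, h_{s_t}\big(\pi(\cdot \mid s_t)\big) \Big) \right].
\]

% Generic Bregman-proximal policy update at each state s: trade off the
% regularized Q-function against the regularizer and a Bregman divergence
% D_phi from the current policy, with step size eta (sketch only).
\[
\pi^{(k+1)}(\cdot \mid s)
  \in \arg\min_{p \in \Delta(\mathcal{A})}
    \Big\{ -\big\langle Q_\tau^{(k)}(s, \cdot),\, p \big\rangle
      + \tau\, h_s(p)
      + \tfrac{1}{\eta}\, D_{\phi}\big(p,\, \pi^{(k)}(\cdot \mid s)\big) \Big\}.
\]

With $h_s$ taken to be negative entropy and $D_\phi$ the Kullback-Leibler divergence, this template specializes to the familiar entropy-regularized softmax policy update.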