TY - GEN
T1 - Variance reduced value iteration and faster algorithms for solving Markov decision processes
AU - Sidford, Aaron
AU - Wang, Mengdi
AU - Wu, Xian
AU - Ye, Yinyu
N1 - Publisher Copyright:
© Copyright 2018 by SIAM.
PY - 2018
Y1 - 2018
N2 - In this paper we provide faster algorithms for approximately solving discounted Markov Decision Processes in multiple parameter regimes. Given a discounted Markov Decision Process (DMDP) with |S| states, |A| actions, discount factor γ ∈ (0, 1), and rewards in the range [-M, M], we show how to compute an ϵ-optimal policy, with probability 1 - δ, in time Õ((|S|²|A| + |S||A|/(1 - γ)³) log(M/ϵ) log(1/δ)). This contribution reflects the first nearly linear time, nearly linearly convergent algorithm for solving DMDPs for intermediate values of γ. We also show how to obtain improved sublinear time algorithms and provide an algorithm which computes an ϵ-optimal policy with probability 1 - δ in time Õ((|S||A|M²/((1 - γ)⁴ϵ²)) log(1/δ)), provided we can sample from the transition function in O(1) time. Interestingly, we obtain our results by a careful modification of approximate value iteration. We show how to combine classic approximate value iteration analysis with new techniques in variance reduction. Our fastest algorithms leverage further insights to ensure that our algorithms make monotonic progress towards the optimal value. This paper is one of the few instances of using sampling to obtain a linearly convergent linear programming algorithm, and we hope that the analysis may be useful more broadly.
AB - In this paper we provide faster algorithms for approximately solving discounted Markov Decision Processes in multiple parameter regimes. Given a discounted Markov Decision Process (DMDP) with |S| states, |A| actions, discount factor γ ∈ (0, 1), and rewards in the range [-M, M], we show how to compute an ϵ-optimal policy, with probability 1 - δ, in time Õ((|S|²|A| + |S||A|/(1 - γ)³) log(M/ϵ) log(1/δ)). This contribution reflects the first nearly linear time, nearly linearly convergent algorithm for solving DMDPs for intermediate values of γ. We also show how to obtain improved sublinear time algorithms and provide an algorithm which computes an ϵ-optimal policy with probability 1 - δ in time Õ((|S||A|M²/((1 - γ)⁴ϵ²)) log(1/δ)), provided we can sample from the transition function in O(1) time. Interestingly, we obtain our results by a careful modification of approximate value iteration. We show how to combine classic approximate value iteration analysis with new techniques in variance reduction. Our fastest algorithms leverage further insights to ensure that our algorithms make monotonic progress towards the optimal value. This paper is one of the few instances of using sampling to obtain a linearly convergent linear programming algorithm, and we hope that the analysis may be useful more broadly.
UR - http://www.scopus.com/inward/record.url?scp=85045567671&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85045567671&partnerID=8YFLogxK
U2 - 10.1137/1.9781611975031.50
DO - 10.1137/1.9781611975031.50
M3 - Conference contribution
AN - SCOPUS:85045567671
T3 - Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms
SP - 770
EP - 787
BT - 29th Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2018
A2 - Czumaj, Artur
PB - Association for Computing Machinery
T2 - 29th Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2018
Y2 - 7 January 2018 through 10 January 2018
ER -