TY - GEN
T1 - Offline Stochastic Shortest Path
T2 - 38th Conference on Uncertainty in Artificial Intelligence, UAI 2022
AU - Yin, Ming
AU - Chen, Wenjing
AU - Wang, Mengdi
AU - Wang, Yu-Xiang
N1 - Publisher Copyright:
© 2022 Proceedings of the 38th Conference on Uncertainty in Artificial Intelligence, UAI 2022. All rights reserved.
PY - 2022
Y1 - 2022
N2 - Goal-oriented Reinforcement Learning, where the agent needs to reach the goal state while simultaneously minimizing the cost, has received significant attention in real-world applications. Its theoretical formulation, stochastic shortest path (SSP), has been intensively researched in the online setting. Nevertheless, it remains understudied when such online interaction is prohibited and only historical data is provided. In this paper, we consider the offline stochastic shortest path problem when the state space and the action space are finite. We design simple value iteration-based algorithms for tackling both offline policy evaluation (OPE) and offline policy learning tasks. Notably, our analysis of these simple algorithms yields strong instance-dependent bounds which can imply worst-case bounds that are near-minimax optimal. We hope our study could help illuminate the fundamental statistical limits of the offline SSP problem and motivate further studies beyond the scope of current consideration.
AB - Goal-oriented Reinforcement Learning, where the agent needs to reach the goal state while simultaneously minimizing the cost, has received significant attention in real-world applications. Its theoretical formulation, stochastic shortest path (SSP), has been intensively researched in the online setting. Nevertheless, it remains understudied when such online interaction is prohibited and only historical data is provided. In this paper, we consider the offline stochastic shortest path problem when the state space and the action space are finite. We design simple value iteration-based algorithms for tackling both offline policy evaluation (OPE) and offline policy learning tasks. Notably, our analysis of these simple algorithms yields strong instance-dependent bounds which can imply worst-case bounds that are near-minimax optimal. We hope our study could help illuminate the fundamental statistical limits of the offline SSP problem and motivate further studies beyond the scope of current consideration.
UR - http://www.scopus.com/inward/record.url?scp=85136330909&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85136330909&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85136330909
T3 - Proceedings of the 38th Conference on Uncertainty in Artificial Intelligence, UAI 2022
SP - 2278
EP - 2288
BT - Proceedings of the 38th Conference on Uncertainty in Artificial Intelligence, UAI 2022
PB - Association for Uncertainty in Artificial Intelligence (AUAI)
Y2 - 1 August 2022 through 5 August 2022
ER -