TY - GEN
T1 - TDprop: Does Jacobi Preconditioning Help Temporal Difference Learning?
T2 - 20th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2021
AU - Romoff, Joshua
AU - Henderson, Peter
AU - Kanaa, David
AU - Bengio, Emmanuel
AU - Touati, Ahmed
AU - Bacon, Pierre-Luc
AU - Pineau, Joelle
N1 - Publisher Copyright:
© 2021 International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). All rights reserved.
PY - 2021
Y1 - 2021
N2 - We investigate whether Jacobi preconditioning, accounting for the bootstrap term in temporal difference (TD) learning, can help boost performance of adaptive optimizers. Our method, TDprop, computes a per-parameter learning rate based on the diagonal preconditioning of the TD update rule. We show how this can be used in both n-step returns and TD(λ). Our theoretical findings demonstrate that including this additional preconditioning information is comparable to normal semi-gradient TD if the optimal learning rate is found for both via a hyperparameter search. This matches our experimental results. In Deep RL experiments using Expected SARSA, TDprop meets or exceeds the performance of Adam in all tested games under near-optimal learning rates, but a well-tuned SGD can yield similar performance in most settings. Our findings suggest that Jacobi preconditioning may improve upon Adam in Deep RL, but despite incorporating additional information from the TD bootstrap term, may not always be better than SGD. Moreover, they suggest that more theoretical investigations are needed to understand adaptive optimizers under optimal hyperparameter regimes in TD learning: simpler methods may, surprisingly, be theoretically comparable after a hyperparameter search.
AB - We investigate whether Jacobi preconditioning, accounting for the bootstrap term in temporal difference (TD) learning, can help boost performance of adaptive optimizers. Our method, TDprop, computes a per-parameter learning rate based on the diagonal preconditioning of the TD update rule. We show how this can be used in both n-step returns and TD(λ). Our theoretical findings demonstrate that including this additional preconditioning information is comparable to normal semi-gradient TD if the optimal learning rate is found for both via a hyperparameter search. This matches our experimental results. In Deep RL experiments using Expected SARSA, TDprop meets or exceeds the performance of Adam in all tested games under near-optimal learning rates, but a well-tuned SGD can yield similar performance in most settings. Our findings suggest that Jacobi preconditioning may improve upon Adam in Deep RL, but despite incorporating additional information from the TD bootstrap term, may not always be better than SGD. Moreover, they suggest that more theoretical investigations are needed to understand adaptive optimizers under optimal hyperparameter regimes in TD learning: simpler methods may, surprisingly, be theoretically comparable after a hyperparameter search.
KW - Adaptive optimization
KW - Deep learning
KW - Reinforcement learning
UR - http://www.scopus.com/inward/record.url?scp=85112407423&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85112407423&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85112407423
T3 - Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS
SP - 1070
EP - 1078
BT - 20th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2021
PB - International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS)
Y2 - 3 May 2021 through 7 May 2021
ER -