TY - JOUR
T1 - Grounding language for transfer in deep reinforcement learning
AU - Narasimhan, Karthik
AU - Barzilay, Regina
AU - Jaakkola, Tommi
N1 - Funding Information:
This work was done while Karthik Narasimhan was affiliated with MIT. We thank Adam Fisch, Victor Quach, and members of the MIT NLP group for their comments on earlier drafts of this paper.
PY - 2018/12
Y1 - 2018/12
N2 - In this paper, we explore the use of natural language to drive transfer for reinforcement learning (RL). Despite the widespread application of deep RL techniques, learning generalized policy representations that work across domains remains a challenging problem. We demonstrate that textual descriptions of environments provide a compact intermediate channel to facilitate effective policy transfer. Specifically, by learning to ground the meaning of text in the dynamics of the environment, such as transitions and rewards, an autonomous agent can effectively bootstrap policy learning on a new domain given its description. We employ a model-based RL approach consisting of a differentiable planning module, a model-free component, and a factorized state representation to make effective use of entity descriptions. Our model outperforms prior work in both transfer and multi-task scenarios across a variety of environments. For instance, we achieve up to 14% and 11.5% absolute improvement over prior models in terms of average and initial rewards, respectively.
AB - In this paper, we explore the use of natural language to drive transfer for reinforcement learning (RL). Despite the widespread application of deep RL techniques, learning generalized policy representations that work across domains remains a challenging problem. We demonstrate that textual descriptions of environments provide a compact intermediate channel to facilitate effective policy transfer. Specifically, by learning to ground the meaning of text in the dynamics of the environment, such as transitions and rewards, an autonomous agent can effectively bootstrap policy learning on a new domain given its description. We employ a model-based RL approach consisting of a differentiable planning module, a model-free component, and a factorized state representation to make effective use of entity descriptions. Our model outperforms prior work in both transfer and multi-task scenarios across a variety of environments. For instance, we achieve up to 14% and 11.5% absolute improvement over prior models in terms of average and initial rewards, respectively.
UR - http://www.scopus.com/inward/record.url?scp=85061290142&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85061290142&partnerID=8YFLogxK
U2 - 10.1613/jair.1.11263
DO - 10.1613/jair.1.11263
M3 - Article
AN - SCOPUS:85061290142
VL - 63
SP - 849
EP - 874
JO - Journal of Artificial Intelligence Research
JF - Journal of Artificial Intelligence Research
SN - 1076-9757
ER -