Offline replay supports planning in human reinforcement learning

Ida Momennejad, A. Ross Otto, Nathaniel D. Daw, Kenneth A. Norman

Research output: Contribution to journal › Article › peer-review

71 Scopus citations


Making decisions in sequentially structured tasks requires integrating distally acquired information. The extensive computational cost of such integration challenges planning methods that integrate online, at decision time. Furthermore, it remains unclear whether ‘offline’ integration during replay supports planning, and if so which memories should be replayed. Inspired by machine learning, we propose that (a) offline replay of trajectories facilitates integrating representations that guide decisions, and (b) unsigned prediction errors (uncertainty) trigger such integrative replay. We designed a 2-step revaluation task for fMRI, whereby participants needed to integrate changes in rewards with past knowledge to optimally replan decisions. As predicted, we found that (a) multi-voxel pattern evidence for off-task replay predicts subsequent replanning; (b) neural sensitivity to uncertainty predicts subsequent replay and replanning; (c) off-task hippocampus and anterior cingulate activity increase when revaluation is required. These findings elucidate how the brain leverages offline mechanisms in planning and goal-directed behavior under uncertainty.
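The machine-learning idea the abstract alludes to resembles Dyna-style planning: store experienced transitions in a model, then "replay" them offline, prioritizing those with large unsigned prediction errors, so that locally learned reward changes propagate back to earlier choice points. The sketch below is an illustrative toy, not the authors' computational model: the deterministic 2-step task, state/action layout, and learning rate are all assumptions made for demonstration. After stage-2 rewards are revalued, offline replay alone flips the stage-1 preference.

```python
import numpy as np

n_states, n_actions, alpha = 3, 2, 0.5
Q = np.zeros((n_states, n_actions))   # action values
R = np.zeros(n_states)                # learned terminal rewards
model = {}                            # stored transitions: (s, a) -> next state

def td_update(s, a, s_next):
    """One-step value update; returns the unsigned prediction error."""
    delta = R[s_next] - Q[s, a]       # next states here are terminal
    Q[s, a] += alpha * delta
    return abs(delta)

# Phase 1: online learning. From state 0, action 0 leads to state 1
# (reward 1) and action 1 leads to state 2 (reward 0).
R[1], R[2] = 1.0, 0.0
for (s, a), s2 in {(0, 0): 1, (0, 1): 2}.items():
    model[(s, a)] = s2
    for _ in range(20):
        td_update(s, a, s2)
assert Q[0, 0] > Q[0, 1]              # agent initially prefers action 0

# Revaluation: only the stage-2 reward is re-experienced directly;
# stage-1 values are not updated online.
R[2] = 10.0

# Offline replay: repeatedly replay the stored transition with the
# largest unsigned prediction error (uncertainty-triggered replay).
for _ in range(20):
    pe = {sa: abs(R[s2] - Q[sa]) for sa, s2 in model.items()}
    sa = max(pe, key=pe.get)
    td_update(*sa, model[sa])

# Replay has propagated the new reward to the stage-1 choice.
assert Q[0, 1] > Q[0, 0]
```

The prioritization step mirrors claim (b) of the abstract: transitions whose cached values disagree most with the updated reward map (large unsigned prediction error) are the ones worth replaying, and replaying them is what enables replanning without any further online experience.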

Original language: English (US)
Article number: e32548
State: Published - Dec 1 2018

All Science Journal Classification (ASJC) codes

  • General Immunology and Microbiology
  • General Biochemistry, Genetics and Molecular Biology
  • General Neuroscience

