Generalization of value in reinforcement learning by humans

G. Elliott Wimmer, Nathaniel D. Daw, Daphna Shohamy

Research output: Contribution to journal › Article › peer-review

80 Scopus citations

Abstract

Research in decision-making has focused on the role of dopamine and its striatal targets in guiding choices via learned stimulus-reward or stimulus-response associations, behavior that is well described by reinforcement learning theories. However, basic reinforcement learning is relatively limited in scope and does not explain how learning about stimulus regularities or relations may guide decision-making. A candidate mechanism for this type of learning comes from the domain of memory, which has highlighted a role for the hippocampus in learning of stimulus-stimulus relations, typically dissociated from the role of the striatum in stimulus-response learning. Here, we used functional magnetic resonance imaging and computational model-based analyses to examine the joint contributions of these mechanisms to reinforcement learning. Humans performed a reinforcement learning task with added relational structure, modeled after tasks used to isolate hippocampal contributions to memory. On each trial participants chose one of four options, but the reward probabilities for pairs of options were correlated across trials. This (uninstructed) relationship between pairs of options potentially enabled an observer to learn about option values based on experience with the other options and to generalize across them. We observed blood oxygen level-dependent (BOLD) activity related to learning in the striatum and also in the hippocampus. By comparing a basic reinforcement learning model to one augmented to allow feedback to generalize between correlated options, we tested whether choice behavior and BOLD activity were influenced by the opportunity to generalize across correlated options. Although such generalization goes beyond standard computational accounts of reinforcement learning and striatal BOLD, both choices and striatal BOLD activity were better explained by the augmented model. Consistent with the hypothesized role for the hippocampus in this generalization, functional connectivity between the ventral striatum and hippocampus was modulated, across participants, by the ability of the augmented model to capture participants' choices. Our results thus point toward an interactive model in which striatal reinforcement learning systems may employ relational representations typically associated with the hippocampus.
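
The augmented model described in the abstract can be illustrated with a minimal sketch: a delta-rule (basic reinforcement learning) update for the chosen option, plus a scaled update to its correlated partner. The sketch below is an assumed illustration in Python, not the paper's exact model; the names (`alpha`, `kappa`, `pair_of`) and the functional form of the generalization term are hypothetical, and the sign of `kappa` would depend on whether paired options are positively or negatively correlated. Setting `kappa = 0` recovers the basic reinforcement learning model used as the comparison.

```python
import numpy as np

def learn_values(choices, rewards, pair_of, alpha=0.1, kappa=0.5, n_options=4):
    """Hypothetical sketch of value learning with generalization across
    correlated options (not the paper's exact model).

    choices : sequence of chosen option indices (0..n_options-1)
    rewards : sequence of received rewards (e.g., 0 or 1)
    pair_of : mapping from each option to its correlated partner
    alpha   : learning rate for the chosen option
    kappa   : generalization weight; 0 reduces to basic reinforcement
              learning, and its sign can reflect positively or negatively
              correlated pairs
    """
    V = np.zeros(n_options)                    # learned option values
    for choice, reward in zip(choices, rewards):
        delta = reward - V[choice]             # reward prediction error
        V[choice] += alpha * delta             # standard delta-rule update
        partner = pair_of[choice]
        V[partner] += kappa * alpha * delta    # feedback generalizes to the partner
    return V

# Example: options (0, 1) and (2, 3) form correlated pairs.
pair_of = {0: 1, 1: 0, 2: 3, 3: 2}
V = learn_values(choices=[0, 2, 0, 3], rewards=[1, 0, 1, 1], pair_of=pair_of)
```

In a model comparison of the kind described in the abstract, both variants would be fit to participants' trial-by-trial choices (for example, through a softmax choice rule over the learned values), and the augmented model is preferred only if the generalization term improves the fit beyond its added flexibility.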

Original language: English (US)
Pages (from-to): 1092-1104
Number of pages: 13
Journal: European Journal of Neuroscience
Volume: 35
Issue number: 7
DOIs
State: Published - Apr 2012
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • General Neuroscience

Keywords

  • Computational model
  • Hippocampus
  • Memory
  • Reward
  • Ventral striatum
