Imaging valuation models in human choice

P. Read Montague, Brooks King-Casas, Jonathan D. Cohen

Research output: Chapter in Book/Report/Conference proceeding › Chapter

248 Scopus citations

Abstract

To make a decision, a system must assign value to each of its available choices. In the human brain, one approach to studying valuation has used rewarding stimuli to map out brain responses by varying the dimension or importance of the rewards. However, theoretical models have taught us that value computations are complex, and so reward probes alone can give only partial information about neural responses related to valuation. In recent years, computationally principled models of value learning have been used in conjunction with noninvasive neuroimaging to tease out neural valuation responses related to reward-learning and decision-making. We restrict our review to the role of these models in a new generation of experiments that seeks to build on a now-large body of diverse reward-related brain responses. We show that the models and the measurements based on them point the way forward in two important directions: the valuation of time and the valuation of fictive experience.
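The "computationally principled models of value learning" the abstract refers to are typified by temporal-difference (TD) reinforcement learning, in which a prediction error drives value updates and is commonly linked to phasic dopamine responses. The following is a minimal illustrative sketch of a TD(0) value update, not the authors' code; the state names, learning rate, and discount factor are assumptions for the example.

```python
# Illustrative TD(0) value-learning sketch (not from the reviewed work).
# The prediction error delta = r + gamma * V(s') - V(s) is the quantity
# this literature relates to reward-related brain responses.

def td_update(V, s, s_next, r, alpha=0.1, gamma=0.95):
    """Apply one temporal-difference update to the value table V (a dict).

    Returns the prediction error delta for state s.
    """
    delta = r + gamma * V.get(s_next, 0.0) - V.get(s, 0.0)
    V[s] = V.get(s, 0.0) + alpha * delta
    return delta

# Hypothetical task: a cue state reliably followed by a rewarded outcome.
V = {}
for _ in range(200):
    td_update(V, 'outcome', 'end', r=1.0)   # reward is delivered in 'outcome'
    td_update(V, 'cue', 'outcome', r=0.0)   # the cue itself is unrewarded

# With repeated pairings, value propagates backward from the reward to the
# cue: V['cue'] approaches gamma * V['outcome'].
```

The key point for imaging studies is that such a model yields a trial-by-trial prediction-error time series that can be regressed against the measured brain signal, rather than relying on reward delivery alone.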

Original language: English (US)
Title of host publication: Annual Review of Neuroscience
Editors: Steven Hyman, Thomas Jessell, Charles Stevens
Pages: 417-448
Number of pages: 32
DOIs
State: Published - 2006

Publication series

Name: Annual Review of Neuroscience
Volume: 29
ISSN (Print): 0147-006X

All Science Journal Classification (ASJC) codes

  • Neuroscience (all)

Keywords

  • Dopamine
  • Fictive learning signal
  • Reinforcement learning
  • Reward
  • Ventral striatum
