Discovering latent causes in reinforcement learning

Samuel J. Gershman, Kenneth A. Norman, Yael Niv

Research output: Contribution to journal › Review article › peer-review

97 Scopus citations

Abstract

Effective reinforcement learning hinges on having an appropriate state representation. But where does this representation come from? We argue that the brain discovers state representations by trying to infer the latent causal structure of the task at hand, and assigning each latent cause to a separate state. In this paper, we review several implications of this latent cause framework, with a focus on Pavlovian conditioning. The framework suggests that conditioning is not the acquisition of associations between cues and outcomes, but rather the acquisition of associations between latent causes and observable stimuli. A latent cause interpretation of conditioning enables us to begin answering questions that have frustrated classical theories: Why do extinguished responses sometimes return? Why do stimuli presented in compound sometimes summate and sometimes do not? Beyond conditioning, the principles of latent causal inference may provide a general theory of structure learning across cognitive domains.
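The abstract's core idea — that the brain groups observations under inferred latent causes rather than learning direct cue–outcome associations — can be made concrete with a small sketch. The following is an illustrative toy model, not the authors' actual implementation: it assumes a Chinese restaurant process (CRP) prior over latent causes and smoothed Bernoulli likelihoods over binary stimulus features, and assigns each trial greedily (local MAP) to the most probable cause. The function name `infer_latent_causes` and all parameters are hypothetical choices for this sketch.

```python
import numpy as np

def infer_latent_causes(trials, alpha=1.0):
    """Greedy (local MAP) latent-cause assignment with a CRP prior.

    trials: (n_trials, n_features) binary array of observed cues/outcomes.
    alpha:  CRP concentration; larger values favor creating new causes.
    Returns one cause index per trial.
    """
    assignments = []
    counts = []        # number of trials assigned to each cause
    feature_sums = []  # per-cause summed feature vectors
    for t, x in enumerate(trials):
        scores = []
        # Existing causes: CRP prior (n_k / (t + alpha)) times a
        # Laplace-smoothed Bernoulli likelihood of the observed features.
        for k in range(len(counts)):
            prior = counts[k] / (t + alpha)
            theta = (feature_sums[k] + 1.0) / (counts[k] + 2.0)
            lik = np.prod(np.where(x == 1, theta, 1.0 - theta))
            scores.append(prior * lik)
        # New cause: prior alpha / (t + alpha), uniform (0.5) per feature.
        scores.append((alpha / (t + alpha)) * 0.5 ** len(x))
        k_star = int(np.argmax(scores))
        if k_star == len(counts):  # open a new cause
            counts.append(0)
            feature_sums.append(np.zeros(len(x)))
        counts[k_star] += 1
        feature_sums[k_star] += x
        assignments.append(k_star)
    return assignments

# Toy Pavlovian example: acquisition (tone + shock in context A),
# then extinction (tone, no shock, in context B).
# Features: [tone, shock, context A, context B].
acquisition = [[1, 1, 1, 0]] * 10
extinction = [[1, 0, 0, 1]] * 5
causes = infer_latent_causes(np.array(acquisition + extinction, dtype=float))
```

In this toy run, all acquisition trials are grouped under one cause, while the changed outcome and context in extinction are better explained by opening a second cause. This illustrates the abstract's point about the return of extinguished responses: the acquisition cause (and its tone–shock association) is never unlearned, only superseded, so reinstating acquisition-like conditions can reactivate it.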

Original language: English (US)
Pages (from-to): 43-50
Number of pages: 8
Journal: Current Opinion in Behavioral Sciences
Volume: 5
DOIs
State: Published - Oct 1 2015

All Science Journal Classification (ASJC) codes

  • Cognitive Neuroscience
  • Psychiatry and Mental health
  • Behavioral Neuroscience
