Learning from other minds: an optimistic critique of reinforcement learning models of social learning

Natalia Vélez, Hyowon Gweon

Research output: Contribution to journal › Review article › peer-review

Abstract

Reinforcement learning models have been productively applied to identify neural correlates of the value of social information. However, by operationalizing social information as a lean, reward-predictive cue, this literature underestimates the richness of human social learning: Humans readily go beyond action-outcome mappings and can draw flexible inferences from a single observation. We argue that computational models of social learning need minds, that is, a generative model of how others’ unobservable mental states cause their observable actions. Recent advances in inferential social learning suggest that even young children learn from others by using an intuitive, generative model of other minds. Bridging developmental, Bayesian, and reinforcement learning perspectives can enrich our understanding of the neural bases of distinctively human social learning.
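The contrast the abstract draws, between treating social information as a lean, reward-predictive cue and inverting a generative model of how mental states cause actions, can be illustrated with a minimal sketch. The code below is not from the article; it assumes a toy setting with two candidate goals and a hand-written likelihood (the names rl_update and infer_goal, the learning rate, and all probabilities are hypothetical), and simply contrasts a delta-rule value update with a one-shot Bayesian inference over a demonstrator's goal.

```python
import numpy as np

# (1) Cue-based reinforcement learning: the social cue's value is nudged
# toward each observed outcome with a Rescorla-Wagner-style delta rule.
def rl_update(value, reward, alpha=0.1):
    """Return the cue's value after one prediction-error update."""
    return value + alpha * (reward - value)

# (2) Inferential social learning: invert a generative model of how goals
# produce actions to infer the demonstrator's goal from a single observation.
def infer_goal(observed_action, goals, likelihood, prior=None):
    """Posterior over goals: P(goal | action) ∝ P(action | goal) P(goal)."""
    prior = np.ones(len(goals)) / len(goals) if prior is None else np.asarray(prior)
    like = np.array([likelihood(observed_action, g) for g in goals])
    post = like * prior
    return post / post.sum()

if __name__ == "__main__":
    # Cue-based learner: needs repeated outcomes for the value to drift upward.
    v = 0.0
    for r in [1, 1, 0, 1]:
        v = rl_update(v, r)
    print(f"learned cue value after four outcomes: {v:.2f}")

    # Inferential learner: one observed action already constrains the goal.
    goals = ["left_box", "right_box"]
    # Toy likelihood: the demonstrator mostly acts toward their true goal.
    likelihood = lambda action, goal: 0.9 if action == goal else 0.1
    posterior = infer_goal("left_box", goals, likelihood)
    print(dict(zip(goals, np.round(posterior, 2))))
```

On this sketch's assumptions, the cue-based learner only accumulates an action-outcome statistic, whereas the inferential learner extracts a graded belief about an unobservable goal from a single observation, which is the kind of flexibility the abstract argues reinforcement learning accounts underestimate.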

Original language: English (US)
Pages (from-to): 110-115
Number of pages: 6
Journal: Current Opinion in Behavioral Sciences
Volume: 38
DOIs
State: Published - Apr 2021
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • Psychiatry and Mental health
  • Cognitive Neuroscience
  • Behavioral Neuroscience
