Bayesian Surprise Predicts Human Event Segmentation in Story Listening

Manoj Kumar, Ariel Goldstein, Sebastian Michelmann, Jeffrey M. Zacks, Uri Hasson, Kenneth A. Norman

Research output: Contribution to journal › Article › peer-review

Abstract

Event segmentation theory posits that people segment continuous experience into discrete events and that event boundaries occur when there are large transient increases in prediction error. Here, we set out to test this theory in the context of story listening, by using a deep learning language model (GPT-2) to compute the predicted probability distribution of the next word, at each point in the story. For three stories, we used the probability distributions generated by GPT-2 to compute the time series of prediction error. We also asked participants to listen to these stories while marking event boundaries. We used regression models to relate the GPT-2 measures to the human segmentation data. We found that event boundaries are associated with transient increases in Bayesian surprise but not with a simpler measure of prediction error (surprisal) that tracks, for each word in the story, how strongly that word was predicted at the previous time point. These results support the hypothesis that prediction error serves as a control mechanism governing event segmentation and point to important differences between operational definitions of prediction error.
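The abstract contrasts two operationalizations of prediction error: surprisal (the negative log probability the model assigned to the word that actually occurred) and Bayesian surprise (how much the predictive distribution over the next word shifts after a word is observed, measured as a KL divergence). A minimal sketch of the two measures, using toy next-word distributions rather than actual GPT-2 outputs (the function names and the 4-word vocabulary are illustrative assumptions, not from the paper):

```python
import numpy as np

def surprisal(p_next, word_idx):
    # Negative log probability assigned to the word that actually occurred:
    # high when the word was weakly predicted at the previous time point.
    return -np.log(p_next[word_idx])

def bayesian_surprise(p_before, p_after):
    # KL divergence D(p_after || p_before): how much the predictive
    # distribution over the next word shifted after observing a word.
    return float(np.sum(p_after * np.log(p_after / p_before)))

# Toy 4-word vocabulary: predictive distribution before and after a word.
p_before = np.array([0.7, 0.1, 0.1, 0.1])
p_after = np.array([0.1, 0.6, 0.2, 0.1])

s = surprisal(p_before, word_idx=1)       # word 1 was weakly predicted
bs = bayesian_surprise(p_before, p_after)  # large belief update
```

In this sketch, a word can be individually unsurprising (low surprisal) while still triggering a large belief update (high Bayesian surprise), or vice versa, which is why the two measures can dissociate in their relationship to event boundaries.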

Original language: English (US)
Article number: e13343
Journal: Cognitive Science
Volume: 47
Issue number: 10
DOIs
State: Published - Oct 2023

All Science Journal Classification (ASJC) codes

  • Experimental and Cognitive Psychology
  • Cognitive Neuroscience
  • Artificial Intelligence

Keywords

  • Bayesian surprise
  • Entropy
  • Event segmentation
  • GPT-2
  • Narratives
  • Surprise
