The relational bottleneck as an inductive bias for efficient abstraction

Taylor W. Webb, Steven M. Frankland, Awni Altabaa, Simon Segert, Kamesh Krishnamurthy, Declan Campbell, Jacob Russin, Tyler Giallanza, Randall O'Reilly, John Lafferty, Jonathan D. Cohen

Research output: Contribution to journal › Review article › peer-review

1 Scopus citation

Abstract

A central challenge for cognitive science is to explain how abstract concepts are acquired from limited experience. This has often been framed in terms of a dichotomy between connectionist and symbolic cognitive models. Here, we highlight a recently emerging line of work that suggests a novel reconciliation of these approaches, by exploiting an inductive bias that we term the relational bottleneck. In that approach, neural networks are constrained via their architecture to focus on relations between perceptual inputs, rather than the attributes of individual inputs. We review a family of models that employ this approach to induce abstractions in a data-efficient manner, emphasizing their potential as candidate models for the acquisition of abstract concepts in the human mind and brain.
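
To make the architectural idea in the abstract concrete, below is a minimal illustrative sketch (not the authors' implementation) of a relational bottleneck in NumPy: raw inputs are embedded by an encoder, only the pairwise relations (here, inner products) between embeddings are passed downstream, and the readout never sees the individual attribute values. All names and weights (`encode`, `W`, `V`) are hypothetical placeholders chosen for the example.

```python
# Minimal sketch of a relational bottleneck (illustrative only; not the
# specific architecture reviewed in the paper). The downstream stage
# receives only pairwise relations between inputs, never raw attributes.
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    """Embed raw inputs into a feature space (stand-in for a perceptual encoder)."""
    return np.tanh(x @ W)

def relational_bottleneck(z):
    """Keep only pairwise relations (inner products) between embeddings.
    Individual attribute values are discarded at this point."""
    return z @ z.T  # (n_objects, n_objects) relation matrix

def downstream(R, V):
    """A simple readout that operates purely on the relation matrix."""
    return R.flatten() @ V

# Example: three objects, each with 8 raw attributes
x = rng.normal(size=(3, 8))
W = rng.normal(size=(8, 4))    # encoder weights (hypothetical)
V = rng.normal(size=(3 * 3,))  # readout weights (hypothetical)

z = encode(x, W)
R = relational_bottleneck(z)   # all the downstream stage can see
y = downstream(R, V)
print(R.shape, y)
```

Because the readout depends only on relations among inputs, any transformation of the inputs that preserves those relations leaves the output unchanged, which is the sense in which the bias encourages abstraction over surface attributes.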

Original language: English (US)
Pages (from-to): 829-843
Number of pages: 15
Journal: Trends in Cognitive Sciences
Volume: 28
Issue number: 9
DOIs
State: Published - Sep 2024

All Science Journal Classification (ASJC) codes

  • Neuropsychology and Physiological Psychology
  • Experimental and Cognitive Psychology
  • Cognitive Neuroscience

Keywords

  • abstraction
  • inductive biases
  • neural networks
  • relations
  • symbol processing
