Abstract
A central challenge for cognitive science is to explain how abstract concepts are acquired from limited experience. This has often been framed in terms of a dichotomy between connectionist and symbolic cognitive models. Here, we highlight a recently emerging line of work that suggests a novel reconciliation of these approaches by exploiting an inductive bias that we term the relational bottleneck. In this approach, neural networks are constrained via their architecture to focus on relations between perceptual inputs, rather than the attributes of individual inputs. We review a family of models that employ this approach to induce abstractions in a data-efficient manner, emphasizing their potential as candidate models for the acquisition of abstract concepts in the human mind and brain.
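The core idea can be illustrated with a minimal sketch (an assumption for illustration, not the authors' implementation): a bottleneck layer that passes only the pairwise relations between input embeddings to downstream computation, discarding the attributes of the individual inputs themselves.

```python
import numpy as np

def relational_bottleneck(embeddings):
    """Map a set of input embeddings to their matrix of pairwise relations.

    A hypothetical, simplified version of a relational bottleneck: downstream
    layers receive only the (n_inputs, n_inputs) matrix of inner products,
    never the individual embedding vectors.
    """
    z = np.asarray(embeddings, dtype=float)  # shape (n_inputs, dim)
    return z @ z.T                           # pairwise relations only

# Two input sets with different attributes but identical relational
# structure produce identical outputs through the bottleneck.
a = np.array([[1.0, 0.0], [0.0, 1.0]])  # standard basis vectors
b = np.array([[0.0, 1.0], [1.0, 0.0]])  # same vectors, swapped
print(np.allclose(relational_bottleneck(a), relational_bottleneck(b)))  # True
```

Because only relations survive the bottleneck, the same downstream computation applies across inputs that differ in their surface attributes, which is one intuition for why such architectures can induce abstractions data-efficiently.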
| Original language | English (US) |
|---|---|
| Pages (from-to) | 829-843 |
| Number of pages | 15 |
| Journal | Trends in Cognitive Sciences |
| Volume | 28 |
| Issue number | 9 |
| DOIs | |
| State | Published - Sep 2024 |
All Science Journal Classification (ASJC) codes
- Neuropsychology and Physiological Psychology
- Experimental and Cognitive Psychology
- Cognitive Neuroscience
Keywords
- abstraction
- inductive biases
- neural networks
- relations
- symbol processing