Abstract
Compact representations of the environment allow humans to behave efficiently in a complex world. Reinforcement learning models capture many behavioral and neural effects but do not explain recent findings showing that structure in the environment influences learning. In parallel, Bayesian cognitive models predict how humans learn structured knowledge but do not have a clear neurobiological implementation. We propose an integration of these two model classes in which structured knowledge learned via approximate Bayesian inference acts as a source of selective attention. In turn, selective attention biases reinforcement learning towards relevant dimensions of the environment. An understanding of structure learning will help to resolve the fundamental challenge in decision science: explaining why people make the decisions they do.
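To make the proposed integration concrete, the sketch below illustrates one way the idea in the abstract could be cashed out computationally: an approximate posterior over which stimulus dimension is task-relevant (structure learning) supplies attention weights, and those weights bias a feature-based reinforcement-learning update toward the dimensions believed relevant. All names, priors, learning rates, and update rules here are illustrative assumptions for exposition, not the authors' model.

```python
import numpy as np

# Hypothetical sketch: a posterior over the task-relevant dimension acts as
# selective attention, which in turn gates feature-based RL value updates.
# Parameters (alpha, beta) and the likelihood stand-in are assumptions.

rng = np.random.default_rng(0)

n_dims, n_feats = 3, 3                      # e.g. color, shape, texture; 3 features each
alpha = 0.3                                 # RL learning rate (assumed)
beta = 5.0                                  # softmax inverse temperature (assumed)

w = np.zeros((n_dims, n_feats))             # learned feature values
log_post = np.full(n_dims, -np.log(n_dims)) # log P(dimension d is relevant), uniform prior

def attention():
    """Attention weights = current posterior over the relevant dimension."""
    p = np.exp(log_post - log_post.max())
    return p / p.sum()

def stimulus_value(stim, phi):
    """Attention-weighted sum of feature values; stim[d] indexes the feature on dimension d."""
    return sum(phi[d] * w[d, stim[d]] for d in range(n_dims))

def trial(stimuli, reward_fn):
    """One choice trial: attend, choose, learn, and update the structure posterior."""
    phi = attention()
    values = np.array([stimulus_value(s, phi) for s in stimuli])
    p_choice = np.exp(beta * values)
    p_choice /= p_choice.sum()
    choice = rng.choice(len(stimuli), p=p_choice)
    r = reward_fn(stimuli[choice])

    # Attention-biased RL: dimensions believed relevant receive larger updates.
    delta = r - values[choice]
    for d in range(n_dims):
        w[d, stimuli[choice][d]] += alpha * phi[d] * delta

    # Crude approximate-Bayesian update of the structure posterior: each
    # dimension is scored by how well its own feature value predicted reward.
    for d in range(n_dims):
        pred = w[d, stimuli[choice][d]]
        log_post[d] += -0.5 * (r - pred) ** 2   # Gaussian-likelihood stand-in
    log_post -= np.logaddexp.reduce(log_post)   # renormalize in log space
    return choice, r
```

Run over many trials with rewards determined by a single dimension, the posterior concentrates on that dimension and learning becomes effectively restricted to it, which is the qualitative behavior the abstract attributes to structure-guided selective attention.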
| Original language | English (US) |
|---|---|
| Pages (from-to) | 278-292 |
| Number of pages | 15 |
| Journal | Trends in Cognitive Sciences |
| Volume | 23 |
| Issue number | 4 |
| DOIs | |
| State | Published - Apr 2019 |
All Science Journal Classification (ASJC) codes
- Neuropsychology and Physiological Psychology
- Experimental and Cognitive Psychology
- Cognitive Neuroscience
Keywords
- Bayesian inference
- approximate inference
- category learning
- corticostriatal circuits
- dopamine
- representation learning
- rule learning
- striatum