Abstract
Computational models are widely used in cognitive science to reveal the mechanisms of learning and decision making. However, it is hard to know whether the best-fit model selected through model comparison accounts for all meaningful variance in behavior. In this work, we propose to use recurrent neural networks (RNNs) to assess the limits of predictability afforded by a model of behavior, and to reveal what (if anything) is missing from the cognitive models. We apply this approach to a complex reward-learning task with a large choice space and rich individual variability. The RNN models outperform the best known cognitive model throughout the entire learning phase. By analyzing and comparing model predictions, we show that the RNN models are more accurate at capturing the temporal dependency between subsequent choices, and better at identifying the subspace of choices where participants’ behavior is most likely to reside. The RNNs can also capture individual differences across participants by utilizing an embedding. These results suggest promising applications of RNNs for predicting human behavior in complex cognitive tasks, in order to reveal cognitive mechanisms and their variability.
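To make the approach concrete, below is a minimal sketch (illustrative only, not the authors' implementation; the PyTorch framing, layer sizes, and names are assumptions) of an RNN that predicts each next choice from the history of previous choices and rewards, with a learned per-participant embedding to capture individual differences.

```python
# Illustrative sketch: a GRU choice-prediction model with a participant embedding.
# All hyperparameters and names are assumptions, not taken from the paper.
import torch
import torch.nn as nn

class ChoiceRNN(nn.Module):
    def __init__(self, n_choices, n_participants, hidden_size=64, embed_size=8):
        super().__init__()
        # Per-participant embedding to capture individual differences.
        self.embed = nn.Embedding(n_participants, embed_size)
        # Input per trial: one-hot previous choice + scalar previous reward + embedding.
        self.rnn = nn.GRU(n_choices + 1 + embed_size, hidden_size, batch_first=True)
        self.readout = nn.Linear(hidden_size, n_choices)

    def forward(self, prev_choices, prev_rewards, participant_ids):
        # prev_choices: (batch, trials) int64; prev_rewards: (batch, trials) float
        x = torch.nn.functional.one_hot(prev_choices, self.readout.out_features).float()
        x = torch.cat([x, prev_rewards.unsqueeze(-1)], dim=-1)
        emb = self.embed(participant_ids)                 # (batch, embed_size)
        emb = emb.unsqueeze(1).expand(-1, x.size(1), -1)  # broadcast over trials
        h, _ = self.rnn(torch.cat([x, emb], dim=-1))
        return self.readout(h)                            # logits over the next choice

# Training would minimize cross-entropy between these logits and the observed next
# choices, i.e. maximize the trial-by-trial predictive likelihood that is compared
# against the cognitive model's predictions.
```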
Original language | English (US) |
---|---|
Pages | 1388-1394 |
Number of pages | 7 |
State | Published - 2021 |
Event | 43rd Annual Meeting of the Cognitive Science Society: Comparative Cognition: Animal Minds, CogSci 2021 - Virtual, Online, Austria |
Duration | Jul 26 2021 → Jul 29 2021 |
Conference
Conference | 43rd Annual Meeting of the Cognitive Science Society: Comparative Cognition: Animal Minds, CogSci 2021 |
---|---|
Country/Territory | Austria |
City | Virtual, Online |
Period | 7/26/21 → 7/29/21 |
All Science Journal Classification (ASJC) codes
- Cognitive Neuroscience
- Artificial Intelligence
- Computer Science Applications
- Human-Computer Interaction
Keywords
- model comparison
- probabilistic reward learning
- recurrent neural network
- sequential decision making