Abstract
Model-based analysis of fMRI data is an important tool for investigating the computational role of different brain regions. With this method, theoretical models of behavior can be leveraged to find the brain structures underlying variables from specific algorithms, such as prediction errors in reinforcement learning. One potential weakness of this approach is that models often have free parameters, and thus the results of the analysis may depend on how these free parameters are set. In this work we asked whether this hypothetical weakness is a problem in practice. We first developed general closed-form expressions for the relationship between the results of fMRI analyses using different regressors, e.g., one corresponding to the true process underlying the measured data and another that is a model-derived approximation of that generative regressor. Then, as a specific test case, we examined the sensitivity of model-based fMRI to the learning rate parameter in reinforcement learning, both in theory and in two previously published datasets. We found that even gross errors in the learning rate lead to only minute changes in the neural results. Our findings thus suggest that precise model fitting is not always necessary for model-based fMRI. They also highlight the difficulty of using fMRI data to arbitrate between different models or model parameters. While these specific results pertain only to the effect of the learning rate in simple reinforcement learning models, we provide a template for testing the effects of other parameters in other models.
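To make the learning-rate sensitivity question concrete, here is a minimal sketch (Python/NumPy, not the authors' analysis code) that generates delta-rule prediction errors for the same reward sequence under a hypothetical "true" learning rate and a grossly different assumed one, then computes the correlation between the two regressors. The task structure, parameter values, and the omission of HRF convolution are illustrative assumptions only.

```python
import numpy as np

def prediction_errors(rewards, alpha):
    """Delta-rule (Rescorla-Wagner) prediction errors for a single-cue task."""
    v = 0.0
    deltas = np.empty(len(rewards))
    for t, r in enumerate(rewards):
        deltas[t] = r - v          # prediction error on trial t
        v += alpha * deltas[t]     # value update with learning rate alpha
    return deltas

rng = np.random.default_rng(0)
# Hypothetical task: 200 trials, reward probability switches halfway through.
rewards = np.concatenate([rng.binomial(1, 0.8, 100),
                          rng.binomial(1, 0.2, 100)])

true_alpha = 0.1       # assumed "generative" learning rate
assumed_alpha = 0.5    # deliberately mismatched learning rate used in the analysis

d_true = prediction_errors(rewards, true_alpha)
d_assumed = prediction_errors(rewards, assumed_alpha)

# Correlation between the two prediction-error regressors; a value near 1
# suggests a GLM would yield similar neural results for either choice of alpha.
print(np.corrcoef(d_true, d_assumed)[0, 1])
```

Loosely speaking, the more strongly the two regressors correlate, the less a GLM analysis can differ between them; this is the intuition behind the abstract's claim that even grossly mis-set learning rates produce only minute changes in the neural results.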
Original language | English (US)
---|---
Article number | e1004237
Journal | PLoS Computational Biology
Volume | 11
Issue number | 6
DOIs | 
State | Published - Jun 18 2015
All Science Journal Classification (ASJC) codes
- Genetics
- Ecology, Evolution, Behavior and Systematics
- Cellular and Molecular Neuroscience
- Molecular Biology
- Ecology
- Computational Theory and Mathematics
- Modeling and Simulation