TY - JOUR
T1 - A rational model of the effects of distributional information on feature learning
AU - Austerweil, Joseph L.
AU - Griffiths, Thomas L.
N1 - Funding Information:
We thank Rob Goldstone, Stephen Palmer, Karen Schloss, Tania Lombrozo, Greg Murphy, Charles Kemp, Amy Perfors, and Eleanor Rosch for insightful discussions, Frank Wood for providing code for the Noisy-OR IBP model, and Brian Tang, David Belford, Shubin Liu, and Julia Ying for help with experiment construction, running participants, and data analysis. A preliminary version of some of the computational results was presented at the 21st Neural Information Processing Systems conference, and Experiments 1 and 2 were presented at the 31st Annual Meeting of the Cognitive Science Society. This work was supported by Grant No. FA9550-07-1-0351 from the Air Force Office of Scientific Research and Grant No. IIS-0845410 from the National Science Foundation.
PY - 2011/12
Y1 - 2011/12
N2 - Most psychological theories treat the features of objects as being fixed and immediately available to observers. However, novel objects have an infinite array of properties that could potentially be encoded as features, raising the question of how people learn which features to use in representing those objects. We focus on the effects of distributional information on feature learning, considering how a rational agent should use statistical information about the properties of objects in identifying features. Inspired by previous behavioral results on human feature learning, we present an ideal observer model based on nonparametric Bayesian statistics. This model balances the idea that objects have potentially infinitely many features with the goal of using a relatively small number of features to represent any finite set of objects. We then explore the predictions of this ideal observer model. In particular, we investigate whether people are sensitive to how parts co-vary over the objects they observe. In a series of four behavioral experiments (three using visual stimuli, one using conceptual stimuli), we demonstrate that people infer different features to represent the same four objects depending on the distribution of parts over the objects they observe. Additionally, in all four experiments, the features people infer have consequences for how they generalize properties to novel objects. We also show that simple models that apply standard dimensionality reduction techniques (principal component analysis and independent component analysis) to the raw sensory data are insufficient to explain our results.
AB - Most psychological theories treat the features of objects as being fixed and immediately available to observers. However, novel objects have an infinite array of properties that could potentially be encoded as features, raising the question of how people learn which features to use in representing those objects. We focus on the effects of distributional information on feature learning, considering how a rational agent should use statistical information about the properties of objects in identifying features. Inspired by previous behavioral results on human feature learning, we present an ideal observer model based on nonparametric Bayesian statistics. This model balances the idea that objects have potentially infinitely many features with the goal of using a relatively small number of features to represent any finite set of objects. We then explore the predictions of this ideal observer model. In particular, we investigate whether people are sensitive to how parts co-vary over the objects they observe. In a series of four behavioral experiments (three using visual stimuli, one using conceptual stimuli), we demonstrate that people infer different features to represent the same four objects depending on the distribution of parts over the objects they observe. Additionally, in all four experiments, the features people infer have consequences for how they generalize properties to novel objects. We also show that simple models that apply standard dimensionality reduction techniques (principal component analysis and independent component analysis) to the raw sensory data are insufficient to explain our results.
KW - Bayesian modeling
KW - Features
KW - Nonparametric Bayesian statistics
KW - Rational analysis
KW - Representational change
UR - http://www.scopus.com/inward/record.url?scp=80052852204&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=80052852204&partnerID=8YFLogxK
U2 - 10.1016/j.cogpsych.2011.08.002
DO - 10.1016/j.cogpsych.2011.08.002
M3 - Article
C2 - 21937008
AN - SCOPUS:80052852204
SN - 0010-0285
VL - 63
SP - 173
EP - 209
JO - Cognitive Psychology
JF - Cognitive Psychology
IS - 4
ER -