Abstract
Representations are a key explanatory device used by cognitive psychologists to account for human behavior. Understanding the effects of context and experience on the representations people use is essential, because if two people encode the same stimulus using different representations, their responses to that stimulus may differ. We present a computational framework, based on nonparametric Bayesian statistics, that can be used to define models that flexibly construct feature representations (where by a feature we mean a part of the image of an object) for a set of observed objects. Austerweil and Griffiths (2011) presented an initial model constructed in this framework that captures how the distribution of parts affects the features people use to represent a set of objects. We build on this work in three ways. First, although people use features that can be transformed on each observation (e.g., translated on the retinal image), many existing feature learning models can only recognize features that occur identically each time (i.e., untransformed). Consequently, we extend the initial model to infer features that are invariant over a set of transformations, and to learn different structures of dependence between feature transformations. Second, we compare two possible methods for capturing the manner in which categorization affects feature representations. Finally, we present a model that learns features incrementally, capturing an effect of the order of object presentation on the features people learn. We conclude by considering the implications and limitations of our empirical and theoretical results.
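To give a concrete sense of the kind of nonparametric Bayesian prior such a framework can build on, the sketch below samples a binary object-by-feature ownership matrix from an Indian Buffet Process, a standard prior over feature representations with an unbounded number of latent features. The abstract does not name a specific prior, so treating the IBP as the underlying process is an illustrative assumption; the function name `sample_ibp` and the concentration parameter `alpha` are likewise hypothetical choices for this sketch.

```python
import numpy as np

def sample_ibp(num_objects, alpha, rng=None):
    """Sample a binary object-by-feature matrix Z from an
    Indian Buffet Process prior with concentration alpha.
    (Illustrative assumption: the abstract does not name the IBP.)"""
    rng = np.random.default_rng() if rng is None else rng
    feature_counts = []    # number of objects owning each existing feature
    rows = []              # feature indices owned by each object
    for n in range(1, num_objects + 1):
        owned = []
        # Existing features: object n takes feature k with probability m_k / n
        for k, m_k in enumerate(feature_counts):
            if rng.random() < m_k / n:
                owned.append(k)
                feature_counts[k] += 1
        # New features: object n introduces Poisson(alpha / n) unseen features
        num_new = rng.poisson(alpha / n)
        for _ in range(num_new):
            owned.append(len(feature_counts))
            feature_counts.append(1)
        rows.append(owned)
    # Assemble the dense binary matrix: rows are objects, columns are features
    Z = np.zeros((num_objects, len(feature_counts)), dtype=int)
    for n, owned in enumerate(rows):
        Z[n, owned] = 1
    return Z

Z = sample_ibp(num_objects=8, alpha=2.0)
print(Z)  # each row is an object; each column is an inferred latent feature
```

In such a matrix, the number of feature columns is not fixed in advance but grows with the data, which matches the flexible construction of feature representations that the abstract describes.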
Original language | English (US) |
---|---|
Pages (from-to) | 817-851 |
Number of pages | 35 |
Journal | Psychological Review |
Volume | 120 |
Issue number | 4 |
DOIs | |
State | Published - Oct 2013 |
Externally published | Yes |
All Science Journal Classification (ASJC) codes
- General Psychology
Keywords
- Bayesian modeling
- Computational constructivism
- Computational modeling
- Feature learning
- Representations