Abstract
How does the visual system recognize images of a novel object after a single observation, despite possible variations in the viewpoint of that object relative to the observer? One possibility is to compare the image against a prototype that is invariant over a relevant set of transformations (e.g., translations and dilations). However, invariance over rotations (i.e., orientation invariance) has proven difficult to analyze, because it applies to some objects but not others. We propose that the invariant transformations of an object are learned by combining prior expectations with real-world evidence. We test this proposal by developing an ideal learner model for learning invariance, which predicts better learning of orientation dependence when prior expectations about orientation are weak. This prediction was supported in two behavioral experiments, in which participants learned the orientation dependence of novel images using feedback from solving arithmetic problems.
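The abstract's core prediction — that weaker prior expectations allow faster learning of orientation dependence from the same evidence — can be illustrated with a minimal Bayesian sketch. This is not the paper's actual model; it assumes a simple Beta-Bernoulli learner where `p` is the probability that a novel object's recognizability is orientation dependent, and the prior's pseudo-count strength is the hypothetical knob corresponding to "prior expectations about orientation."

```python
# Hedged sketch (not the published model): a Beta-Bernoulli ideal learner
# inferring p = probability that an object's appearance is orientation
# dependent, from binary trial feedback. A weak (low pseudo-count) prior
# lets identical evidence move the posterior mean much further than a
# strong prior does, mirroring the predicted learning advantage.

def posterior_mean(prior_a: float, prior_b: float,
                   successes: int, failures: int) -> float:
    """Mean of the Beta(prior_a + successes, prior_b + failures) posterior."""
    a = prior_a + successes
    b = prior_b + failures
    return a / (a + b)

# Same evidence for both learners: 8 of 10 trials indicate dependence.
weak = posterior_mean(1, 1, 8, 2)     # weak, uniform prior: Beta(1, 1)
strong = posterior_mean(2, 20, 8, 2)  # strong prior against dependence: Beta(2, 20)

print(f"weak prior   -> posterior mean {weak:.3f}")    # 0.750
print(f"strong prior -> posterior mean {strong:.3f}")  # 0.312
```

Under the weak prior the posterior tracks the observed rate (0.75) almost exactly, while the strong prior keeps the estimate near its initial expectation despite identical data.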
| Original language | English (US) |
|---|---|
| Pages (from-to) | 1183-1201 |
| Number of pages | 19 |
| Journal | Cognitive Science |
| Volume | 41 |
| DOIs | |
| State | Published - May 2017 |
| Externally published | Yes |
All Science Journal Classification (ASJC) codes
- Experimental and Cognitive Psychology
- Artificial Intelligence
- Cognitive Neuroscience
Keywords
- Bayesian modeling
- Ideal learner modeling
- Invariance
- Object recognition
- Representation
- Shape recognition