Learning to Be (In)variant: Combining Prior Knowledge and Experience to Infer Orientation Invariance in Object Recognition

Joseph L. Austerweil, Thomas L. Griffiths, Stephen E. Palmer

Research output: Contribution to journal › Article › peer-review


Abstract

How does the visual system recognize images of a novel object after a single observation despite possible variations in the viewpoint of that object relative to the observer? One possibility is comparing the image with a prototype for invariance over a relevant transformation set (e.g., translations and dilations). However, invariance over rotations (i.e., orientation invariance) has proven difficult to analyze, because it applies to some objects but not others. We propose that the invariant transformations of an object are learned by incorporating prior expectations with real-world evidence. We test this proposal by developing an ideal learner model for learning invariance that predicts better learning of orientation dependence when prior expectations about orientation are weak. This prediction was supported in two behavioral experiments, where participants learned the orientation dependence of novel images using feedback from solving arithmetic problems.
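To make the abstract's key prediction concrete, below is a minimal illustrative sketch of a Bayesian learner deciding whether a novel object is orientation-invariant from trial feedback. This is not the authors' actual model: the two-hypothesis space, the likelihood values, and the way feedback is encoded are all assumptions made here for illustration only.

```python
# Minimal sketch (assumptions, not the paper's model): a Bayesian learner that
# infers whether a novel object is orientation-invariant from feedback trials.

def posterior_invariant(prior_invariant, observations,
                        p_match_if_invariant=0.95,
                        p_match_if_dependent=0.20):
    """Return the updated probability that the object is orientation-invariant.

    observations: list of booleans, True if a rotated image was treated as the
    same object according to task feedback, False otherwise. The likelihood
    parameters are illustrative placeholders.
    """
    p_inv = prior_invariant
    for same in observations:
        like_inv = p_match_if_invariant if same else 1 - p_match_if_invariant
        like_dep = p_match_if_dependent if same else 1 - p_match_if_dependent
        # Bayes' rule over the two hypotheses {invariant, orientation-dependent}.
        numer = like_inv * p_inv
        p_inv = numer / (numer + like_dep * (1 - p_inv))
    return p_inv


if __name__ == "__main__":
    # Feedback mostly indicating "different object" after rotation.
    evidence = [False, False, True, False]
    for prior in (0.5, 0.9, 0.99):  # weak vs. strong prior expectation of invariance
        print(f"prior={prior:.2f} -> posterior={posterior_invariant(prior, evidence):.3f}")
```

Running the sketch shows the qualitative pattern described in the abstract: with a weak prior (0.5) the posterior shifts quickly toward orientation dependence, while with a strong prior (0.99) much more disconfirming evidence is needed before the learner abandons invariance.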

Original language: English (US)
Pages (from-to): 1183-1201
Number of pages: 19
Journal: Cognitive Science
Volume: 41
DOIs
State: Published - May 2017
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • Experimental and Cognitive Psychology
  • Artificial Intelligence
  • Cognitive Neuroscience

Keywords

  • Bayesian modeling
  • Ideal learner modeling
  • Invariance
  • Object recognition
  • Representation
  • Shape recognition

