Learning continuous attractors in recurrent networks

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

63 Scopus citations

Abstract

One approach to invariant object recognition employs a recurrent neural network as an associative memory. In the standard depiction of the network's state space, memories of objects are stored as attractive fixed points of the dynamics. I argue for a modification of this picture: if an object has a continuous family of instantiations, it should be represented by a continuous attractor. This idea is illustrated with a network that learns to complete patterns. To perform the task of filling in missing information, the network develops a continuous attractor that models the manifold from which the patterns are drawn. From a statistical viewpoint, the pattern completion task allows a formulation of unsupervised learning in terms of regression rather than density estimation.
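The following is a minimal NumPy sketch of the continuous-attractor idea, not the paper's actual network, learning rule, or data. It uses a hypothetical pattern family: Gaussian bumps of activity on a ring, which form a one-parameter manifold. Recurrent weights W are fit by ridge regression so that every training pattern is approximately a fixed point of the dynamics (the regression view of unsupervised learning mentioned in the abstract), and completion clamps the observed units while the dynamics fill in the rest.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                   # units around a ring

def bump(theta, width=0.4):
    # Gaussian bump of activity centered at angle theta (the pattern family).
    angles = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
    d = np.angle(np.exp(1j * (angles - theta)))   # wrapped angular distance
    return np.exp(-d ** 2 / (2.0 * width ** 2))

# Training patterns drawn from the one-dimensional manifold of bump positions.
thetas = rng.uniform(0.0, 2.0 * np.pi, size=500)
X = np.stack([bump(t) for t in thetas])           # shape (500, N)

# Regression view of unsupervised learning: fit recurrent weights W so that
# W x ~ x for every training pattern (ridge-regularized least squares), which
# makes the pattern manifold approximately a continuum of fixed points.
lam = 1e-2
W = X.T @ X @ np.linalg.inv(X.T @ X + lam * np.eye(N))

# Pattern completion: clamp the observed units, iterate dynamics on the rest.
target = bump(1.7)
observed = np.arange(N) < N // 2                  # first half is visible
x = np.where(observed, target, 0.0)               # missing units start at zero
for _ in range(50):
    x = np.maximum(W @ x, 0.0)                    # simple rectified update
    x[observed] = target[observed]                # re-clamp the known values

err = np.max(np.abs(x[~observed] - target[~observed]))
print(f"max completion error on the missing units: {err:.3f}")
```

Because the fitted W approximately projects onto the span of the bump family, the clamped iteration behaves like alternating projection between the pattern manifold and the set of states consistent with the observation, which is the filling-in task the abstract describes.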

Original language: English (US)
Title of host publication: Advances in Neural Information Processing Systems 10 - Proceedings of the 1997 Conference, NIPS 1997
Publisher: Neural Information Processing Systems Foundation
Pages: 654-660
Number of pages: 7
ISBN (Print): 0262100762, 9780262100762
State: Published - 1998
Externally published: Yes
Event: 11th Annual Conference on Neural Information Processing Systems, NIPS 1997 - Denver, CO, United States
Duration: Dec 1 1997 - Dec 6 1997

Publication series

Name: Advances in Neural Information Processing Systems
ISSN (Print): 1049-5258

Other

Other: 11th Annual Conference on Neural Information Processing Systems, NIPS 1997
Country/Territory: United States
City: Denver, CO
Period: 12/1/97 - 12/6/97

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing
