Learning in Intelligent Embedded Systems

Daniel D. Lee, H. Sebastian Seung

Research output: Contribution to conference › Paper › peer-review

6 Scopus citations

Abstract

Information processing capabilities of embedded systems presently lack the robustness and rich complexity found in biological systems. Endowing artificial systems with the ability to adapt to changing conditions requires algorithms that can rapidly learn from examples. We demonstrate the application of one such learning algorithm on an inexpensive robot constructed to perform simple sensorimotor tasks. The robot learns to track a particular object by discovering the salient visual and auditory cues unique to that object. The system uses a convolutional neural network to combine color, luminance, motion, and auditory information. The weights of the network are adjusted using feedback from a teacher to reflect the reliability of the various input channels in the surrounding environment. We also discuss how unsupervised learning can discover features in data without external interaction. An unsupervised algorithm based on nonnegative matrix factorization is able to automatically learn the different parts of objects. Such a parts-based representation of data is crucial for robust object recognition.
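The unsupervised component mentioned above is nonnegative matrix factorization (NMF). The paper's exact implementation is not given here, so the following is only a minimal sketch of the standard NMF multiplicative-update rules for the squared-error objective, assuming NumPy; the function name, rank, and iteration count are illustrative choices, not details from the paper:

```python
import numpy as np

def nmf(V, r, iters=500, seed=0):
    """Factor a nonnegative matrix V (n x m) into nonnegative W (n x r)
    and H (r x m) so that V ~ W @ H, minimizing squared Frobenius error."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    # Random strictly positive initialization.
    W = rng.random((n, r)) + 1e-4
    H = rng.random((r, m)) + 1e-4
    for _ in range(iters):
        # Multiplicative updates: elementwise ratios keep all entries
        # nonnegative throughout, which yields the parts-based structure.
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H
```

Because the factors are constrained to be nonnegative, each column of `W` tends to represent an additive "part" of the input, which is the parts-based representation the abstract refers to.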

Original language: English (US)
State: Published - 1999
Externally published: Yes
Event: USENIX Workshop on Embedded Systems 1999 - Cambridge, United States
Duration: Mar 29 1999 - Mar 31 1999

All Science Journal Classification (ASJC) codes

  • Software
  • Electrical and Electronic Engineering
  • Hardware and Architecture
