Learning curves for stochastic gradient descent in linear feedforward networks

Justin Werfel, Xiaohui Xie, H. Sebastian Seung

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

14 Scopus citations

Abstract

Gradient-following learning methods can encounter problems of implementation in many applications, and stochastic variants are frequently used to overcome these difficulties. We derive quantitative learning curves for three online training methods used with a linear perceptron: direct gradient descent, node perturbation, and weight perturbation. The maximum learning rate for the stochastic methods scales inversely with the first power of the dimensionality of the noise injected into the system; with sufficiently small learning rate, all three methods give identical learning curves. These results suggest guidelines for when these stochastic methods will be limited in their utility, and considerations for architectures in which they will be effective.
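
The abstract compares three online update rules for a linear perceptron. As a rough illustration of that setting (not the authors' code), the sketch below implements all three rules in NumPy for a linear student-teacher task; the network sizes, learning rate, noise amplitude, and variable names are assumptions chosen for the demo, not values taken from the paper.

```python
# Illustrative sketch of the three online update rules discussed in the abstract,
# for a linear student network y = W x trained to match a linear teacher
# y* = W_star x under squared error. All constants below are demo assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 10, 5     # assumed input/output dimensions
eta = 5e-4              # learning rate, kept small so all three curves coincide
sigma = 1e-3            # amplitude of the injected perturbation noise
steps = 20000

W_star = rng.standard_normal((n_out, n_in))   # fixed "teacher" weights


def loss(W, x, y_star):
    d = y_star - W @ x
    return 0.5 * d @ d


def run(rule):
    W = np.zeros((n_out, n_in))
    curve = np.empty(steps)
    for t in range(steps):
        x = rng.standard_normal(n_in)
        y_star = W_star @ x
        err = y_star - W @ x          # error of the unperturbed network
        E = 0.5 * err @ err
        curve[t] = E

        if rule == "gradient":
            # Direct gradient descent on the squared error.
            W += eta * np.outer(err, x)
        elif rule == "node":
            # Node perturbation: noise injected at the output units; the change
            # in error, correlated with the noise, estimates the gradient.
            xi = sigma * rng.standard_normal(n_out)
            E_pert = 0.5 * np.sum((err - xi) ** 2)
            W -= eta * (E_pert - E) / sigma**2 * np.outer(xi, x)
        elif rule == "weight":
            # Weight perturbation: noise injected directly into every weight.
            psi = sigma * rng.standard_normal(W.shape)
            E_pert = loss(W + psi, x, y_star)
            W -= eta * (E_pert - E) / sigma**2 * psi
    return curve


for rule in ("gradient", "node", "weight"):
    print(f"{rule:8s} final error ~ {run(rule)[-500:].mean():.4f}")
```

With the small learning rate chosen here the three final errors come out nearly identical, consistent with the abstract's claim; raising eta toward the stability limit illustrates the scaling it describes, with the weight-perturbation rule, whose noise dimension equals the number of weights, the first to diverge.
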

Original language: English (US)
Title of host publication: Advances in Neural Information Processing Systems 16 - Proceedings of the 2003 Conference, NIPS 2003
Publisher: Neural Information Processing Systems Foundation
ISBN (Print): 0262201526, 9780262201520
State: Published - 2004
Externally published: Yes
Event: 17th Annual Conference on Neural Information Processing Systems, NIPS 2003 - Vancouver, BC, Canada
Duration: Dec 8, 2003 - Dec 13, 2003

Publication series

Name: Advances in Neural Information Processing Systems
ISSN (Print): 1049-5258

Other

Other: 17th Annual Conference on Neural Information Processing Systems, NIPS 2003
Country/Territory: Canada
City: Vancouver, BC
Period: 12/8/03 - 12/13/03

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing
