Local and global convergence of on-line learning

N. Barkai, H. S. Seung, H. Sompolinsky

Research output: Contribution to journal › Article

21 Scopus citations

Abstract

We study the performance of a generalized perceptron algorithm for learning realizable dichotomies, with an error-dependent adaptive learning rate. The asymptotic scaling form of the solution to the associated Markov equations is derived, assuming certain smoothness conditions. We show that the system converges to the optimal solution and the generalization error asymptotically obeys a universal inverse power law in the number of examples. The system is capable of escaping from local minima and adapts rapidly to shifts in the target function. The general theory is illustrated for the perceptron and committee machine.
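The abstract describes on-line perceptron learning of a realizable dichotomy with an error-dependent adaptive learning rate. The following is a minimal illustrative sketch of that setting, not the paper's exact algorithm: a student perceptron learns a teacher's dichotomy one example at a time, with a learning rate tied to a running estimate of the mistake rate (the specific update rule and error estimator here are assumptions for illustration).

```python
import numpy as np

# Illustrative sketch: on-line perceptron learning of a realizable dichotomy
# with an error-dependent learning rate. The adaptive rule below is a simple
# stand-in for the paper's scheme, chosen only to show the general setting.

rng = np.random.default_rng(0)
N = 100                                  # input dimension
teacher = rng.standard_normal(N)         # target weight vector (realizable rule)
teacher /= np.linalg.norm(teacher)

w = rng.standard_normal(N)               # student weights
err_est = 0.5                            # running estimate of the error rate

def generalization_error(w, teacher):
    # For perceptrons, the generalization error is arccos(overlap) / pi,
    # where overlap is the cosine of the angle between student and teacher.
    overlap = w @ teacher / (np.linalg.norm(w) * np.linalg.norm(teacher))
    return np.arccos(np.clip(overlap, -1.0, 1.0)) / np.pi

for t in range(20000):
    x = rng.standard_normal(N)
    y = np.sign(teacher @ x)             # label from the target dichotomy
    mistake = float(np.sign(w @ x) != y)
    err_est += 0.01 * (mistake - err_est)  # track the on-line mistake rate
    eta = err_est                          # error-dependent learning rate
    if mistake:
        w += eta * y * x                   # perceptron-type update

print(generalization_error(w, teacher))
```

As the error estimate shrinks, so does the learning rate, which is the qualitative mechanism behind the convergence and the inverse-power-law decay of the generalization error discussed in the abstract.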

Original language: English (US)
Pages (from-to): 1415-1418
Number of pages: 4
Journal: Physical Review Letters
Volume: 75
Issue number: 7
DOIs
State: Published - Jan 1 1995
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • Physics and Astronomy(all)
