Noise conditions for prespecified convergence rates of stochastic approximation algorithms

Edwin K.P. Chong, I. Jeng Wang, Sanjeev R. Kulkarni

Research output: Contribution to journal › Article › peer-review



We develop deterministic necessary and sufficient conditions on individual noise sequences of a stochastic approximation algorithm for the error of the iterates to converge at a given rate. Specifically, suppose {ρ_n} is a given positive sequence converging monotonically to zero. Consider a stochastic approximation algorithm x_{n+1} = x_n - a_n(A_n x_n - b_n) + a_n e_n, where {x_n} is the iterate sequence, {a_n} is the step-size sequence, {e_n} is the noise sequence, and x* is the desired zero of the function f(x) = Ax - b. Then, under appropriate assumptions, we show that x_n - x* = o(ρ_n) if and only if the sequence {e_n} satisfies one of five equivalent conditions. These conditions are based on well-known formulas for noise sequences: Kushner and Clark's condition, Chen's condition, Kulkarni and Horn's condition, a decomposition condition, and a weighted averaging condition. Our necessary and sufficient condition on {e_n} for a convergence rate of {ρ_n} is, in essence, that the sequence {e_n/ρ_n} satisfies any one of the five well-known conditions above. We provide examples to illustrate our result. In particular, we easily recover the familiar result that if a_n = a/n and {e_n} is a martingale difference process with bounded variance, then x_n - x* = o(n^(-1/2) (log n)^β) for any β > 1/2.
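The martingale-difference example in the abstract can be illustrated with a short simulation. The sketch below (not from the paper; all parameter values are illustrative assumptions) runs the scalar recursion x_{n+1} = x_n - a_n(A_n x_n - b_n) + a_n e_n with A_n = A = 1, b_n = b = 0 (so x* = 0), step sizes a_n = a/n, and i.i.d. Gaussian noise e_n, which is a martingale difference sequence with bounded variance. It then compares the final error against ρ_N = N^(-1/2) (log N)^β for a β > 1/2:

```python
# Illustrative simulation of the abstract's example (parameters are assumptions):
# x_{n+1} = x_n - a_n (A_n x_n - b_n) + a_n e_n with A = 1, b = 0, so x* = 0.
import math
import random

random.seed(0)

a = 1.0        # step-size constant in a_n = a/n
beta = 0.75    # any beta > 1/2 in the rate o(n^(-1/2) (log n)^beta)
N = 200_000    # number of iterations

x = 5.0        # arbitrary starting point
for n in range(1, N + 1):
    a_n = a / n
    e_n = random.gauss(0.0, 1.0)    # i.i.d. noise: a martingale difference sequence
    x = x - a_n * (x - 0.0) + a_n * e_n

# Scaled error |x_N - x*| / rho_N with rho_N = N^(-1/2) (log N)^beta; this
# ratio is small for large N, consistent with x_n - x* = o(rho_n).
rho_N = N ** -0.5 * math.log(N) ** beta
print(abs(x) / rho_N)
```

Here the error |x_N| is on the order of N^(-1/2), so the ratio to ρ_N shrinks like (log N)^(-β), in line with the stated rate.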

Original language: English (US)
Pages (from-to): 810-814
Number of pages: 5
Journal: IEEE Transactions on Information Theory
Issue number: 2
State: Published - 1999
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • Information Systems
  • Computer Science Applications
  • Library and Information Sciences


Keywords

  • Convergence rate
  • Kiefer-Wolfowitz
  • Necessary and sufficient conditions
  • Noise sequences
  • Robbins-Monro
  • Stochastic approximation


