Learning algorithms for separable approximations of discrete stochastic optimization problems

Warren Buckler Powell, Andrzej Ruszczyński, Huseyin Topaloglu

Research output: Contribution to journal › Article › peer-review

74 Scopus citations


We propose the use of sequences of separable, piecewise linear approximations for solving nondifferentiable stochastic optimization problems. The approximations are constructed adaptively using a combination of stochastic subgradient information and possibly sample information on the objective function itself. We prove the convergence of several versions of such methods when the objective function is separable and has integer break points, and we illustrate their behavior on numerical examples. We then demonstrate the performance of these methods on nonseparable problems that arise in the context of two-stage stochastic programming, showing that they provide near-optimal solutions with a very fast rate of convergence compared with other solution techniques.
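The core idea described above — adaptively updating a separable, piecewise linear approximation from stochastic subgradient samples while keeping it convex — can be illustrated with a minimal Python sketch. This is not the paper's exact routine: the function name `spar_update`, the fixed smoothing stepsize, and the "leveling" projection used to restore monotone slopes are illustrative assumptions. The approximation of one separable component is stored as slopes on unit intervals between integer break points; convexity means the slopes must be nondecreasing.

```python
def spar_update(slopes, k, sample_slope, stepsize):
    """One illustrative update of a convex, piecewise linear approximation
    with integer break points.

    slopes[i] is the slope of the approximation on the interval [i, i+1);
    convexity requires slopes to be nondecreasing. We smooth the observed
    stochastic subgradient `sample_slope` into slope k, then project back
    onto the set of nondecreasing slopes by leveling violating neighbors.
    (Hypothetical sketch; the paper studies several projection variants.)
    """
    s = list(slopes)  # work on a copy
    # Smoothing step: blend the sampled subgradient into the current slope.
    s[k] = (1 - stepsize) * s[k] + stepsize * sample_slope
    # Projection step: slopes left of k may not exceed s[k], ...
    for i in range(k):
        s[i] = min(s[i], s[k])
    # ... and slopes right of k must be at least s[k].
    for j in range(k + 1, len(s)):
        s[j] = max(s[j], s[k])
    return s


# Example: a sampled subgradient of 0 at break point 2 pulls that slope
# down, and the projection levels the slopes to its left to keep convexity.
updated = spar_update([1.0, 2.0, 3.0, 4.0], k=2, sample_slope=0.0, stepsize=1.0)
```

With a stepsize of 1 the update simply replaces slope 2 and then repairs monotonicity; in an adaptive scheme the stepsize would decay over iterations so the slopes average out the noise in the sampled subgradients.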

Original language: English (US)
Pages (from-to): 814-836
Number of pages: 23
Journal: Mathematics of Operations Research
Issue number: 4
State: Published - Nov 2004

All Science Journal Classification (ASJC) codes

  • General Mathematics
  • Computer Science Applications
  • Management Science and Operations Research


Keywords

  • Dynamic programming
  • Stochastic programming

