Learning algorithms for separable approximations of discrete stochastic optimization problems

Warren Buckler Powell, Andrzej Ruszczyński, Huseyin Topaloglu

Research output: Contribution to journal › Article › peer-review

75 Scopus citations

Abstract

We propose the use of sequences of separable, piecewise linear approximations for solving nondifferentiable stochastic optimization problems. The approximations are constructed adaptively using a combination of stochastic subgradient information and possibly sample information on the objective function itself. We prove the convergence of several versions of such methods when the objective function is separable and has integer break points, and we illustrate their behavior on numerical examples. We then demonstrate the performance of these methods on nonseparable problems arising in the context of two-stage stochastic programming, and show that they provide near-optimal solutions with a very fast rate of convergence compared with other solution techniques.
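The abstract's core idea — maintaining a separable, piecewise linear, convex approximation with integer break points and updating its slopes adaptively from sampled subgradients — can be illustrated with a minimal sketch. This is not the authors' exact algorithm: the function names (`update_slopes`, `pav`), the stepsize rule, and the use of a pool-adjacent-violators projection to restore convexity after an update are illustrative assumptions.

```python
def pav(y):
    """Pool-adjacent-violators: project a slope sequence onto the set of
    nondecreasing sequences (least-squares sense). Nondecreasing slopes
    on integer break points correspond to a convex piecewise linear function."""
    blocks = []  # each block holds [sum_of_values, count]
    for val in y:
        blocks.append([val, 1])
        # Merge adjacent blocks while their averages violate monotonicity.
        while len(blocks) > 1 and blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    out = []
    for s, c in blocks:
        out.extend([s / c] * c)
    return out


def update_slopes(v, k, sample_slope, alpha):
    """One adaptive update of a separable piecewise linear approximation.

    v            -- current slopes v[0..K-1] over integer break points
    k            -- break point at which a stochastic subgradient was observed
    sample_slope -- sampled subgradient (noisy slope estimate) at k
    alpha        -- smoothing stepsize in (0, 1]
    """
    v = list(v)
    # Stochastic-approximation step: smooth the stored slope toward the sample.
    v[k] = (1 - alpha) * v[k] + alpha * sample_slope
    # The smoothed slope may break monotonicity; project back so the
    # approximation stays convex.
    return pav(v)
```

In a full method, such updates would run once per sample for each coordinate of the separable approximation, with a diminishing stepsize sequence to obtain convergence.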

Original language: English (US)
Pages (from-to): 814-836
Number of pages: 23
Journal: Mathematics of Operations Research
Volume: 29
Issue number: 4
DOIs
State: Published - Nov 2004

All Science Journal Classification (ASJC) codes

  • General Mathematics
  • Computer Science Applications
  • Management Science and Operations Research

Keywords

  • Dynamic programming
  • Stochastic programming
