Optimal online learning for nonlinear belief models using discrete priors

Weidong Han, Warren B. Powell

Research output: Contribution to journal › Article › peer-review



We consider an optimal learning problem where we are trying to learn a function that is nonlinear in unknown parameters in an online setting. We formulate the problem as a dynamic program, provide the optimality condition using Bellman’s equation, and propose a multiperiod lookahead policy to overcome the nonconcavity in the value of information. We adopt a sampled belief model, which we refer to as a discrete prior. For an infinite-horizon problem with discounted cumulative rewards, we prove asymptotic convergence properties under the proposed policy, a rare result for online learning. We then demonstrate the approach in three different settings: a health setting where we make medical decisions to maximize healthcare response over time, a dynamic pricing setting where we make pricing decisions to maximize cumulative revenue, and a clinical pharmacology setting where we choose dosages to minimize the deviation between actual and target effects.
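The sampled belief model mentioned in the abstract maintains a discrete prior: a finite set of candidate parameter vectors for the nonlinear function, each carrying a probability that is updated by Bayes' rule after every observation. A minimal sketch of that update step is below; the specific response function `f`, the candidate parameters, and the Gaussian noise model are illustrative assumptions, not the paper's actual experimental setup.

```python
import math

# Hypothetical nonlinear response model, f(x; theta) = theta0 * (1 - exp(-theta1 * x)).
# This stands in for whatever nonlinear-in-parameters function is being learned.
def f(x, theta):
    return theta[0] * (1.0 - math.exp(-theta[1] * x))

def update_discrete_prior(priors, thetas, x, y, noise_sd=1.0):
    """One Bayesian update of a discrete prior.

    Each candidate theta_k keeps weight p_k; after observing response y at
    decision x, p_k is reweighted by the Gaussian likelihood of y under
    f(x; theta_k) and the weights are renormalized to sum to one.
    """
    likelihoods = [
        math.exp(-((y - f(x, th)) ** 2) / (2.0 * noise_sd ** 2)) for th in thetas
    ]
    posterior = [p * l for p, l in zip(priors, likelihoods)]
    z = sum(posterior)
    return [p / z for p in posterior]

# Three candidate parameter vectors under a uniform prior (illustrative values).
thetas = [(1.0, 0.5), (2.0, 0.5), (2.0, 1.0)]
priors = [1.0 / 3.0] * 3

# Observe y = 1.7 at x = 2.0; mass shifts toward the candidate whose
# predicted response f(2.0; theta) is closest to 1.7.
posterior = update_discrete_prior(priors, thetas, x=2.0, y=1.7, noise_sd=0.3)
```

With a discrete prior the posterior stays in the same finite family, so the belief state is just the weight vector, which is what makes the dynamic-programming formulation and the multiperiod lookahead computationally tractable.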

Original language: English (US)
Pages (from-to): 1538-1556
Number of pages: 19
Journal: Operations Research
Issue number: 5
State: Published - Sep 2020

All Science Journal Classification (ASJC) codes

  • Computer Science Applications
  • Management Science and Operations Research


Keywords

  • Dynamic program
  • Knowledge gradient
  • Multiarmed bandit
  • Multiperiod lookahead
  • Online learning
  • Optimal learning
  • Value of information


