Optimal learning for nonlinear parametric belief models over multidimensional continuous spaces

Xinyu He, Yangzhou Hu, Warren Buckler Powell

Research output: Contribution to journal › Article › peer-review

1 Scopus citation

Abstract

We consider the optimal learning problem of optimizing an expensive function f(x; θ) with a known parametric form but unknown parameter vector θ. Observations of the function, which might involve simulations or laboratory or field experiments, are both expensive and noisy. Our goal is to simultaneously learn the true parameter θ while finding the optimal value of x. We develop an efficient implementation of a method, known as the knowledge gradient, that optimizes the value of information from each experiment. Our algorithm can handle x and θ that are multidimensional continuous vectors. We prove that our algorithm asymptotically learns the correct θ and the best value of x. Experiments show that the algorithm produces fast convergence, even in higher dimensions.
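The abstract describes selecting experiments by the knowledge gradient, i.e. measuring where the expected value of information about θ is largest. As a rough illustration of that idea (not the paper's algorithm, which handles multidimensional continuous x and θ), the following sketch uses a hypothetical one-dimensional model `f(x; theta) = -theta_0 (x - theta_1)^2`, a particle belief over θ, and a Monte Carlo estimate of how much a noisy observation at each candidate x would improve the best predicted value:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parametric model with unknown theta = (scale, center);
# this is an illustrative stand-in, not one of the paper's test problems.
def f(x, theta):
    return -theta[0] * (x - theta[1]) ** 2

noise_sd = 0.1
# Particle approximation of the belief over theta.
thetas = rng.uniform([0.5, -1.0], [2.0, 1.0], size=(200, 2))
weights = np.full(len(thetas), 1.0 / len(thetas))
candidates = np.linspace(-1.0, 1.0, 21)

def posterior_weights(w, x, y):
    """Bayes update of particle weights after observing y ~ N(f(x; theta), noise_sd^2)."""
    preds = np.array([f(x, t) for t in thetas])
    lik = np.exp(-0.5 * ((y - preds) / noise_sd) ** 2)
    w_new = w * lik
    return w_new / w_new.sum()

def knowledge_gradient(x, n_mc=30):
    """Monte Carlo estimate of the expected gain in max_x' E[f(x'; theta)]
    from one noisy observation at x."""
    fx = np.array([[f(xc, t) for t in thetas] for xc in candidates])
    best_now = (fx @ weights).max()
    gains = []
    for _ in range(n_mc):
        t = thetas[rng.choice(len(thetas), p=weights)]   # theta drawn from belief
        y = f(x, t) + noise_sd * rng.standard_normal()   # simulated observation at x
        w_post = posterior_weights(weights, x, y)
        gains.append((fx @ w_post).max() - best_now)
    return float(np.mean(gains))

kg = [knowledge_gradient(x) for x in candidates]
x_next = candidates[int(np.argmax(kg))]  # next measurement: where information is most valuable
```

Repeating this loop (measure at `x_next`, update `weights`, recompute `kg`) trades off learning θ against exploiting the current best x, which is the simultaneous learn-and-optimize behavior the abstract describes.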

Original language: English (US)
Pages (from-to): 2945-2974
Number of pages: 30
Journal: SIAM Journal on Optimization
Volume: 28
Issue number: 4
DOIs
State: Published - 2018

All Science Journal Classification (ASJC) codes

  • Software
  • Theoretical Computer Science
  • Applied Mathematics

Keywords

  • Knowledge gradient
  • Nonlinear parametric model
  • Optimal learning

