Abstract
We propose a sequential learning policy for noisy discrete global optimization and ranking and selection (R&S) problems with high-dimensional sparse belief functions, where there are hundreds or even thousands of features but only a small subset of them carries explanatory power. Our problem setting, motivated by the experimental sciences, arises when we must choose which time-consuming and expensive experiment to run next. We derive a sparse knowledge gradient (SpKG) decision policy that uses ℓ1-penalized regression (Lasso) to identify the sparsity pattern before the measurement budget is exhausted. This policy is a novel hybrid of Bayesian R&S and a frequentist learning approach. Theoretically, we provide an error bound for the posterior mean estimate, which is shown to achieve the minimax-optimal √(s log p / n) rate. Controlled experiments on both synthetic data and a real application, automatically designing experiments to identify the structure of an RNA molecule, show that the algorithm efficiently learns the correct set of nonzero parameters and outperforms several other learning policies.
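The frequentist ingredient of the policy, recovering which of the p features are active from n ≪ p noisy measurements via the Lasso, can be illustrated with a minimal sketch. This is not the authors' SpKG implementation: the simulated data, the noise level, and the penalty scaling below are illustrative assumptions, and scikit-learn's `Lasso` stands in for whatever solver the paper uses.

```python
# Minimal sketch (assumptions noted above, not the paper's SpKG code):
# recover the sparsity pattern of a high-dimensional linear belief model
# from a small number of noisy observations using the Lasso.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, s = 100, 1000, 5                        # observations, features, true nonzeros
X = rng.standard_normal((n, p))               # measurement design
beta = np.zeros(p)
beta[:s] = rng.uniform(1.0, 2.0, size=s)      # sparse ground-truth coefficients
y = X @ beta + 0.1 * rng.standard_normal(n)   # noisy responses

# Penalty on the order of sqrt(log(p) / n): the scaling under which the
# sqrt(s * log(p) / n) estimation-error rate is typically derived.
alpha = 0.1 * np.sqrt(np.log(p) / n)
lasso = Lasso(alpha=alpha).fit(X, y)

support = np.flatnonzero(lasso.coef_)         # estimated sparsity pattern
print("estimated nonzero features:", support)
```

Once an estimate of the support is available, a knowledge-gradient-style policy need only maintain its Bayesian belief over the identified low-dimensional subspace, which is what makes the hybrid tractable at this scale.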
| Original language | English (US) |
| --- | --- |
| Pages | 417-425 |
| Number of pages | 9 |
| State | Published - Jan 1 2016 |
| Event | 19th International Conference on Artificial Intelligence and Statistics, AISTATS 2016 - Cadiz, Spain (Duration: May 9 2016 → May 11 2016) |
Conference
| Conference | 19th International Conference on Artificial Intelligence and Statistics, AISTATS 2016 |
| --- | --- |
| Country/Territory | Spain |
| City | Cadiz |
| Period | 5/9/16 → 5/11/16 |
All Science Journal Classification (ASJC) codes
- Artificial Intelligence
- Statistics and Probability