Abstract
Datasets are growing not just in size but in complexity, creating a demand for rich models and quantification of uncertainty. Bayesian methods are an excellent fit for this demand, but scaling Bayesian inference is a challenge. In response to this challenge, there has been considerable recent work based on varying assumptions about model structure, underlying computational resources, and the importance of asymptotic correctness. As a result, there is a zoo of ideas with a wide range of assumptions and applicability. In this paper, we seek to identify unifying principles, patterns, and intuitions for scaling Bayesian inference. We review existing work on utilizing modern computing resources with both MCMC and variational approximation techniques. From this taxonomy of ideas, we characterize the general principles that have proven successful for designing scalable inference procedures and comment on the path forward.
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 119-247 |
| Number of pages | 129 |
| Journal | Foundations and Trends in Machine Learning |
| Volume | 9 |
| Issue number | 2-3 |
| DOIs | |
| State | Published - 2016 |
| Externally published | Yes |
All Science Journal Classification (ASJC) codes
- Software
- Human-Computer Interaction
- Artificial Intelligence