TY - JOUR
T1 - PASS-GLM: Polynomial approximate sufficient statistics for scalable Bayesian GLM inference
T2 - 31st Annual Conference on Neural Information Processing Systems, NIPS 2017
AU - Huggins, Jonathan H.
AU - Adams, Ryan P.
AU - Broderick, Tamara
N1 - Funding Information:
JHH and TB are supported in part by ONR grant N00014-17-1-2072, ONR MURI grant N00014-11-1-0688, and a Google Faculty Research Award. RPA is supported by NSF IIS-1421780 and the Alfred P. Sloan Foundation.
Publisher Copyright:
© 2017 Neural Information Processing Systems Foundation. All rights reserved.
PY - 2017
Y1 - 2017
AB - Generalized linear models (GLMs) - such as logistic regression, Poisson regression, and robust regression - provide interpretable models for diverse data types. Probabilistic approaches, particularly Bayesian ones, allow coherent estimates of uncertainty, incorporation of prior information, and sharing of power across experiments via hierarchical models. In practice, however, the approximate Bayesian methods necessary for inference have either failed to scale to large data sets or failed to provide theoretical guarantees on the quality of inference. We propose a new approach based on constructing polynomial approximate sufficient statistics for GLMs (PASS-GLM). We demonstrate that our method admits a simple algorithm as well as trivial streaming and distributed extensions that do not compound error across computations. We provide theoretical guarantees on the quality of point (MAP) estimates, the approximate posterior, and posterior mean and uncertainty estimates. We validate our approach empirically in the case of logistic regression using a quadratic approximation and show competitive performance with stochastic gradient descent, MCMC, and the Laplace approximation in terms of speed and multiple measures of accuracy - including on an advertising data set with 40 million data points and 20,000 covariates.
UR - http://www.scopus.com/inward/record.url?scp=85047005558&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85047005558&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85047005558
SN - 1049-5258
VL - 2017-December
SP - 3612
EP - 3622
JO - Advances in Neural Information Processing Systems
JF - Advances in Neural Information Processing Systems
Y2 - 4 December 2017 through 9 December 2017
ER -