TY - GEN
T1 - Probabilistic backpropagation for scalable learning of Bayesian neural networks
AU - Hernández-Lobato, José Miguel
AU - Adams, Ryan P.
N1 - Publisher Copyright:
© Copyright 2015 by International Machine Learning Society (IMLS). All rights reserved.
PY - 2015
Y1 - 2015
AB - Large multilayer neural networks trained with backpropagation have recently achieved state-of-the-art results in a wide range of problems. However, using backprop for neural net learning still has some disadvantages, e.g., having to tune a large number of hyperparameters to the data, lack of calibrated probabilistic predictions, and a tendency to overfit the training data. In principle, the Bayesian approach to learning neural networks does not have these problems. However, existing Bayesian techniques lack scalability to large dataset and network sizes. In this work we present a novel scalable method for learning Bayesian neural networks, called probabilistic backpropagation (PBP). Similar to classical backpropagation, PBP works by computing a forward propagation of probabilities through the network and then doing a backward computation of gradients. A series of experiments on ten real-world datasets show that PBP is significantly faster than other techniques, while offering competitive predictive abilities. Our experiments also show that PBP provides accurate estimates of the posterior variance on the network weights.
UR - http://www.scopus.com/inward/record.url?scp=84969909658&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84969909658&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:84969909658
T3 - 32nd International Conference on Machine Learning, ICML 2015
SP - 1861
EP - 1869
BT - 32nd International Conference on Machine Learning, ICML 2015
A2 - Bach, Francis
A2 - Blei, David
PB - International Machine Learning Society (IMLS)
T2 - 32nd International Conference on Machine Learning, ICML 2015
Y2 - 6 July 2015 through 11 July 2015
ER -