Abstract
Deep belief networks are a powerful way to model complex probability distributions. However, it is difficult to learn the structure of a belief network, particularly one with hidden units. The Indian buffet process has been used as a nonparametric Bayesian prior on the structure of a directed belief network with a single infinitely wide hidden layer. Here, we introduce the cascading Indian buffet process (CIBP), which provides a prior on the structure of a layered, directed belief network that is unbounded in both depth and width, yet allows tractable inference. We use the CIBP prior with the nonlinear Gaussian belief network framework to allow each unit to vary its behavior between discrete and continuous representations. We use Markov chain Monte Carlo for inference in this model and explore the structures learned on image data.
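The record does not include code, but the cascading construction described in the abstract can be illustrated with a short sketch. The snippet below is a minimal, hypothetical illustration of the generative idea: units in one layer act as Indian buffet process "customers" whose sampled "dishes" become the units of the next hidden layer, and the cascade stops when a layer instantiates no units. It assumes a single concentration parameter `alpha` shared across layers and adds a `max_depth` safety cap; the paper's actual CIBP uses a richer parameterization and couples the structure with nonlinear Gaussian belief network weights, none of which is modeled here. The function names `sample_ibp` and `sample_cibp_structure` are illustrative, not from the paper.

```python
import numpy as np

def sample_ibp(num_customers, alpha, rng):
    """Draw a binary matrix from a standard Indian buffet process.

    Rows index customers (units in the current layer); columns index
    dishes (units instantiated in the layer above).
    """
    dish_counts = []        # how many customers so far chose each dish
    rows = []
    for i in range(1, num_customers + 1):
        row = []
        # Existing dishes are chosen in proportion to their popularity.
        for k, count in enumerate(dish_counts):
            take = rng.random() < count / i
            row.append(1 if take else 0)
            if take:
                dish_counts[k] += 1
        # Each customer also samples a Poisson number of new dishes.
        new = rng.poisson(alpha / i)
        row.extend([1] * new)
        dish_counts.extend([1] * new)
        rows.append(row)
    width = len(dish_counts)
    Z = np.zeros((num_customers, width), dtype=int)
    for i, row in enumerate(rows):
        Z[i, :len(row)] = row
    return Z

def sample_cibp_structure(num_visible, alpha, rng, max_depth=50):
    """Cascade IBP draws upward until a layer has no units (or max_depth)."""
    layers = []
    k = num_visible
    while k > 0 and len(layers) < max_depth:
        Z = sample_ibp(k, alpha, rng)   # edges between layer m and layer m+1
        layers.append(Z)
        k = Z.shape[1]                  # width of the newly instantiated layer
    return layers

# Example: structure over 16 visible units.
layers = sample_cibp_structure(num_visible=16, alpha=1.0,
                               rng=np.random.default_rng(0))
print([Z.shape for Z in layers])
```

Because each successive layer tends to be narrower, the cascade terminates after finitely many layers with probability one, which is what makes the unbounded-depth prior tractable in practice.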
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 1-8 |
| Number of pages | 8 |
| Journal | Journal of Machine Learning Research |
| Volume | 9 |
| State | Published - 2010 |
| Externally published | Yes |
| Event | 13th International Conference on Artificial Intelligence and Statistics, AISTATS 2010 - Sardinia, Italy |
| Event duration | May 13 2010 → May 15 2010 |
All Science Journal Classification (ASJC) codes
- Software
- Control and Systems Engineering
- Statistics and Probability
- Artificial Intelligence