Learning the structure of deep sparse graphical models

Ryan Prescott Adams, Hanna M. Wallach, Zoubin Ghahramani

Research output: Contribution to journal › Conference article

23 Scopus citations

Abstract

Deep belief networks are a powerful way to model complex probability distributions. However, it is difficult to learn the structure of a belief network, particularly one with hidden units. The Indian buffet process has been used as a nonparametric Bayesian prior on the structure of a directed belief network with a single infinitely wide hidden layer. Here, we introduce the cascading Indian buffet process (CIBP), which provides a prior on the structure of a layered, directed belief network that is unbounded in both depth and width, yet allows tractable inference. We use the CIBP prior with the nonlinear Gaussian belief network framework to allow each unit to vary its behavior between discrete and continuous representations. We use Markov chain Monte Carlo for inference in this model and explore the structures learned on image data.
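The cascading Indian buffet process described in the abstract builds on the standard IBP "restaurant" construction: each unit in a layer samples its parents in the layer above, and the dishes chosen become that next layer's units, recursing until a layer is empty. A minimal sketch of this generative process (not the authors' code; function names are hypothetical, and MCMC inference is omitted):

```python
import numpy as np

def sample_ibp(num_customers, alpha, rng):
    """Sample a binary matrix from the Indian buffet process.

    Customer i takes existing dish k with probability m_k / i
    (m_k = number of previous takers), then Poisson(alpha / i) new dishes.
    """
    dish_counts = []  # m_k for each dish sampled so far
    rows = []
    for i in range(1, num_customers + 1):
        row = [rng.random() < m / i for m in dish_counts]
        for k, taken in enumerate(row):
            if taken:
                dish_counts[k] += 1
        num_new = rng.poisson(alpha / i)
        row.extend([True] * num_new)
        dish_counts.extend([1] * num_new)
        rows.append(row)
    Z = np.zeros((num_customers, len(dish_counts)), dtype=bool)
    for i, row in enumerate(rows):
        Z[i, :len(row)] = row
    return Z

def sample_cibp_widths(num_visible, alpha, rng, max_depth=50):
    """Cascade the IBP: units in layer m are customers, and the dishes
    they take become the units of layer m + 1; stop at an empty layer."""
    widths = [num_visible]
    while widths[-1] > 0 and len(widths) < max_depth:
        Z = sample_ibp(widths[-1], alpha, rng)
        widths.append(Z.shape[1])
    return widths
```

Smaller values of `alpha` yield sparser connection matrices and hence shallower, narrower networks; the cascade terminates at finite depth with probability one, which is what makes the unbounded prior tractable (the `max_depth` cap here is just a safety guard for the sketch).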

Original language: English (US)
Pages (from-to): 1-8
Number of pages: 8
Journal: Journal of Machine Learning Research
Volume: 9
State: Published - Dec 1 2010
Externally published: Yes
Event: 13th International Conference on Artificial Intelligence and Statistics, AISTATS 2010 - Sardinia, Italy
Duration: May 13 2010 - May 15 2010

All Science Journal Classification (ASJC) codes

  • Software
  • Control and Systems Engineering
  • Statistics and Probability
  • Artificial Intelligence
