Provable learning of noisy-or networks

Sanjeev Arora, Rong Ge, Tengyu Ma, Andrej Risteski

Research output: Chapter in Book/Report/Conference proceeding (Conference contribution)

11 Scopus citations

Abstract

Many machine learning applications use latent variable models to explain structure in data, whereby visible variables (i.e., the coordinates of a given datapoint) are explained as a probabilistic function of some hidden variables. Finding the maximum-likelihood parameters is NP-hard even in very simple settings. In recent years, provably efficient algorithms were nevertheless developed for models with linear structure: topic models, mixture models, hidden Markov models, etc. These algorithms use matrix or tensor decomposition and make reasonable assumptions about the parameters of the underlying model. But matrix or tensor decomposition seems of little use when the latent variable model has nonlinearities. The current paper shows how to make progress: tensor decomposition is applied to learning the single-layer noisy-or network, a textbook example of a Bayes net, used for example in the classic QMR-DT software for diagnosing which disease(s) a patient may have by observing the symptoms he/she exhibits. The technical novelty here, which should be useful in other settings in the future, is an analysis of tensor decomposition in the presence of systematic error (i.e., where the noise/error is correlated with the signal and does not decrease as the number of samples goes to infinity). This requires rethinking all steps of tensor decomposition methods from the ground up. For simplicity, our analysis is stated assuming that the network parameters were chosen from a probability distribution, but the method seems more generally applicable.
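To make the setting concrete, below is a minimal sketch of sampling from a single-layer noisy-or network in the QMR-DT style: hidden disease variables are independent Bernoulli, and each symptom fails to be triggered independently through each active disease. This uses the standard textbook parameterization; the names `W`, `prior`, and `sample_noisy_or` are illustrative assumptions, not the paper's notation, and the sketch omits a leak term.

```python
import numpy as np

def sample_noisy_or(W, prior, n_samples, rng=None):
    """Sample visible symptom vectors from a single-layer noisy-or network.

    W[i, j] is the "failure" probability that active disease i does NOT
    trigger symptom j, so P(symptom j off | diseases d) = prod_i W[i, j]**d[i].
    prior[i] is the probability that hidden disease i is present.
    (Illustrative parameterization, not the paper's notation.)
    """
    rng = np.random.default_rng(rng)
    m, n = W.shape                        # m hidden diseases, n visible symptoms
    samples = np.empty((n_samples, n), dtype=int)
    for t in range(n_samples):
        d = rng.random(m) < prior         # which diseases are present
        # probability each symptom stays off given the active diseases
        p_off = np.prod(np.where(d[:, None], W, 1.0), axis=0)
        samples[t] = (rng.random(n) >= p_off).astype(int)
    return samples

# Toy example: 3 diseases, 5 symptoms
rng = np.random.default_rng(0)
W = rng.uniform(0.2, 0.9, size=(3, 5))
prior = np.array([0.1, 0.3, 0.05])
X = sample_noisy_or(W, prior, n_samples=1000, rng=1)
print(X.mean(axis=0))                     # empirical symptom frequencies
```

The learning problem studied in the paper is the reverse direction: recover the network parameters from samples like `X`, which is where tensor decomposition under systematic error comes in.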

Original language: English (US)
Title of host publication: STOC 2017 - Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing
Editors: Pierre McKenzie, Valerie King, Hamed Hatami
Publisher: Association for Computing Machinery
Pages: 1057-1066
Number of pages: 10
ISBN (Electronic): 9781450345286
DOIs
State: Published - Jun 19 2017
Event: 49th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2017 - Montreal, Canada
Duration: Jun 19 2017 - Jun 23 2017

Publication series

Name: Proceedings of the Annual ACM Symposium on Theory of Computing
Volume: Part F128415
ISSN (Print): 0737-8017

Other

Other: 49th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2017
Country/Territory: Canada
City: Montreal
Period: 6/19/17 - 6/23/17

All Science Journal Classification (ASJC) codes

  • Software

Keywords

  • Davis-Kahan
  • Log-linear
  • Noisy-or
  • Tensor decomposition
  • Unsupervised learning
