Learning mixtures of arbitrary gaussians

Sanjeev Arora, R. Kannan

Research output: Contribution to journal › Conference article


Abstract

Mixtures of gaussian (or normal) distributions arise in a variety of application areas. Many techniques have been proposed for the task of finding the component gaussians given samples from the mixture, such as the EM algorithm, a local-search heuristic introduced by Dempster, Laird and Rubin (1977). However, such heuristics are known to require time exponential in the dimension (i.e., number of variables) in the worst case, even when the number of components is 2. This paper presents the first algorithm that provably learns the component gaussians in time that is polynomial in the dimension. The gaussians may have arbitrary shape provided they satisfy a "nondegeneracy" condition, which requires their high-probability regions not to be "too close" together.
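For context, the EM heuristic mentioned in the abstract can be sketched as follows. This is an illustrative one-dimensional implementation of generic EM for a Gaussian mixture, not the paper's polynomial-time algorithm; all function and variable names are our own.

```python
import numpy as np

def em_gaussian_mixture(x, k=2, iters=200, seed=0):
    """Fit a k-component 1-D Gaussian mixture to samples x via EM (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    # Initialize: means drawn from the data, shared variance, uniform weights.
    mu = rng.choice(x, size=k, replace=False)
    var = np.full(k, np.var(x))
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities r[i, j] = P(component j | x_i).
        dens = (w / np.sqrt(2 * np.pi * var)) * np.exp(
            -(x[:, None] - mu) ** 2 / (2 * var))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances from responsibilities.
        nk = r.sum(axis=0)
        w = nk / n
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

# Two well-separated components, in the spirit of the paper's
# "nondegeneracy" (separation) condition.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-5, 1, 500), rng.normal(5, 1, 500)])
w, mu, var = em_gaussian_mixture(x)
```

With components this well separated, EM reliably recovers means near -5 and 5; the exponential worst-case behavior the abstract refers to arises in high dimension and without such separation.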

Original language: English (US)
Pages (from-to): 247-257
Number of pages: 11
Journal: Conference Proceedings of the Annual ACM Symposium on Theory of Computing
State: Published - Jan 1 2001
Event: 33rd Annual ACM Symposium on Theory of Computing - Crete, Greece
Duration: Jul 6 2001 - Jul 8 2001

All Science Journal Classification (ASJC) codes

  • Software

Keywords

  • Clustering
  • Gaussians
  • Learning
  • Mixture distributions

