Learning the parts of objects by non-negative matrix factorization

Daniel D. Lee, H. Sebastian Seung

Research output: Contribution to journal › Article › peer-review

9481 Scopus citations

Abstract

Is perception of the whole based on perception of its parts? There is psychological and physiological evidence for parts-based representations in the brain, and certain computational theories of object recognition rely on such representations. But little is known about how brains or computers might learn the parts of objects. Here we demonstrate an algorithm for non-negative matrix factorization that is able to learn parts of faces and semantic features of text. This is in contrast to other methods, such as principal components analysis and vector quantization, that learn holistic, not parts-based, representations. Non-negative matrix factorization is distinguished from the other methods by its use of non-negativity constraints. These constraints lead to a parts-based representation because they allow only additive, not subtractive, combinations. When non-negative matrix factorization is implemented as a neural network, parts-based representations emerge by virtue of two properties: the firing rates of neurons are never negative and synaptic strengths do not change sign.
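To illustrate the idea of factorization under non-negativity constraints described in the abstract, here is a minimal sketch in NumPy. The multiplicative update rules used below follow the authors' later algorithmic treatment of NMF rather than this article itself; the function name `nmf` and all parameter choices are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def nmf(V, r, n_iter=300, seed=0, eps=1e-10):
    """Approximate a non-negative matrix V (n x m) as W @ H,
    with W (n x r) and H (r x m) kept element-wise non-negative.

    Multiplicative updates (a common NMF algorithm, assumed here)
    preserve non-negativity because every factor in each update is
    non-negative -- only additive combinations of parts can arise.
    """
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + eps
    H = rng.random((r, m)) + eps
    for _ in range(n_iter):
        # Updates that decrease the Frobenius error ||V - WH||_F^2
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Usage: factor a small non-negative matrix into r = 2 "parts"
V = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 1.0],
              [2.0, 1.0, 0.0]])
W, H = nmf(V, r=2)
```

Because neither factor is ever allowed to go negative, each column of `W` acts as an additive "part" and `H` gives the non-negative coefficients that combine them, mirroring the paper's contrast with PCA, whose components may cancel each other through subtraction.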

Original language: English (US)
Pages (from-to): 788-791
Number of pages: 4
Journal: Nature
Volume: 401
Issue number: 6755
DOIs
State: Published - Oct 21 1999
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • General
