The Strength of Weak Learnability

Research output: Contribution to journal › Article › peer-review

3512 Scopus citations

Abstract

This paper addresses the problem of improving the accuracy of an hypothesis output by a learning algorithm in the distribution-free (PAC) learning model. A concept class is learnable (or strongly learnable) if, given access to a source of examples of the unknown concept, the learner with high probability is able to output an hypothesis that is correct on all but an arbitrarily small fraction of the instances. The concept class is weakly learnable if the learner can produce an hypothesis that performs only slightly better than random guessing. In this paper, it is shown that these two notions of learnability are equivalent. A method is described for converting a weak learning algorithm into one that achieves arbitrarily high accuracy. This construction may have practical applications as a tool for efficiently converting a mediocre learning algorithm into one that performs extremely well. In addition, the construction has some interesting theoretical consequences, including a set of general upper bounds on the complexity of any strong learning algorithm as a function of the allowed error ε.
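The conversion the abstract describes is the boosting construction: train three weak hypotheses on three carefully filtered distributions and take their majority vote, then apply the step recursively to drive the error down. The sketch below illustrates a single level of that idea in Python; it is a minimal illustration under stated assumptions, not the paper's exact procedure. The binary {0, 1} labels, the fixed sample sizes, the rejection-sampling filters, and the names `weak_learner` and `sample_example` are all assumptions introduced for the example.

```python
import random

def boost_one_level(weak_learner, sample_example, n=2000, max_tries=100_000):
    """One level of the weak-to-strong conversion: a majority vote of
    three weak hypotheses trained on three filtered distributions.

    Assumed interface (hypothetical, for illustration): labels are 0/1,
    weak_learner(S) maps a labeled sample S = [(x, y), ...] to a
    hypothesis h with h(x) in {0, 1}, and sample_example() draws one
    labeled example (x, y) from the target distribution.
    """
    # h1: a weak hypothesis trained on the original distribution.
    h1 = weak_learner([sample_example() for _ in range(n)])

    # h2: trained on a filtered distribution on which h1 is correct
    # exactly half the time, so h2 is forced to add new information.
    # Implemented here by rejection sampling, capped to avoid hanging.
    s2, tries = [], 0
    while len(s2) < n and tries < max_tries:
        tries += 1
        x, y = sample_example()
        want_correct = random.random() < 0.5
        if (h1(x) == y) == want_correct:
            s2.append((x, y))
    h2 = weak_learner(s2)

    # h3: trained only on examples where h1 and h2 disagree, i.e.
    # exactly the region where the majority vote is still undecided.
    s3, tries = [], 0
    while len(s3) < n and tries < max_tries:
        tries += 1
        x, y = sample_example()
        if h1(x) != h2(x):
            s3.append((x, y))
    h3 = weak_learner(s3)

    # The combined hypothesis: majority vote of the three.
    return lambda x: 1 if h1(x) + h2(x) + h3(x) >= 2 else 0
```

Applying this step recursively, treating the boosted learner itself as the next level's weak learner, is what lets the construction push the error below any target ε, which is the sense in which weak and strong learnability coincide.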

Original language: English (US)
Pages (from-to): 197-227
Number of pages: 31
Journal: Machine Learning
Volume: 5
Issue number: 2
DOIs
State: Published - Jun 1990
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • Software
  • Artificial Intelligence

Keywords

  • Machine learning
  • PAC learning
  • learnability theory
  • learning from examples
  • polynomial-time identification

