Strength of weak learnability

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

41 Scopus citations


The problem of improving the accuracy of a hypothesis output by a learning algorithm in the distribution-free (probably approximately correct, or PAC) learning model is considered. A concept class is learnable (or strongly learnable) if, given access to a source of examples from the unknown concept, the learner with high probability is able to output a hypothesis that is correct on all but an arbitrarily small fraction of the instances. The concept class is weakly learnable if the learner can produce a hypothesis that performs only slightly better than random guessing. It is shown that these two notions of learnability are equivalent. An explicit method is described for directly converting a weak learning algorithm into one that achieves arbitrarily high accuracy. This construction may have practical applications as a tool for efficiently converting a mediocre learning algorithm into one that performs extremely well. In addition, the construction has some interesting theoretical consequences.
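The paper's own construction is a recursive majority-vote scheme; as a hedged illustration of the same idea — that a learner only slightly better than random guessing can be converted into an arbitrarily accurate one — the sketch below uses the later AdaBoost-style reweighting of examples (a simpler descendant of this work, not the construction described in the paper). The decision-stump weak learner and the toy interval-labeled dataset are illustrative choices, not from the source.

```python
import math

def weak_learner(X, y, w):
    """Weak learner (illustrative): best one-feature threshold stump under weights w.
    Returns ((feature, threshold, polarity), weighted_error)."""
    d = len(X[0])
    best, best_err = None, float("inf")
    for j in range(d):
        for t in sorted({x[j] for x in X}):
            for pol in (1, -1):
                err = sum(wi for xi, yi, wi in zip(X, y, w)
                          if (pol if xi[j] >= t else -pol) != yi)
                if err < best_err:
                    best_err, best = err, (j, t, pol)
    return best, best_err

def stump_predict(stump, x):
    j, t, pol = stump
    return pol if x[j] >= t else -pol

def boost(X, y, rounds):
    """Boost the weak learner into a weighted-majority vote of its hypotheses."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        stump, err = weak_learner(X, y, w)
        err = max(err, 1e-10)
        if err >= 0.5:          # weak learner no better than random guessing: stop
            break
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, stump))
        # Reweight: increase the weight of examples this hypothesis got wrong,
        # forcing the next weak hypothesis to focus on them
        w = [wi * math.exp(-alpha * yi * stump_predict(stump, xi))
             for wi, xi, yi in zip(w, X, y)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    vote = sum(a * stump_predict(st, x) for a, st in ensemble)
    return 1 if vote >= 0 else -1

# Toy 1-D data: positive on the interval [3, 6], which no single stump can fit,
# but three boosted stumps classify perfectly
X = [(i,) for i in range(10)]
y = [-1, -1, -1, 1, 1, 1, 1, -1, -1, -1]
ens = boost(X, y, rounds=3)
acc = sum(predict(ens, xi) == yi for xi, yi in zip(X, y)) / len(X)
# acc → 1.0: the weighted majority of three weak stumps is exactly correct
```

Each individual stump here errs on at least 30% of the (reweighted) examples, yet the combined vote achieves zero training error — a concrete instance of the weak-to-strong conversion the abstract describes.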

Original language: English (US)
Title of host publication: Annual Symposium on Foundations of Computer Science (Proceedings)
Publisher: IEEE
Number of pages: 6
ISBN (Print): 0818619821, 9780818619823
State: Published - 1989
Externally published: Yes
Event: 30th Annual Symposium on Foundations of Computer Science - Research Triangle Park, NC, USA
Duration: Oct 30, 1989 - Nov 1, 1989

Publication series

Name: Annual Symposium on Foundations of Computer Science (Proceedings)
ISSN (Print): 0272-5428


Other: 30th Annual Symposium on Foundations of Computer Science
City: Research Triangle Park, NC, USA

All Science Journal Classification (ASJC) codes

  • Hardware and Architecture


