TY - GEN

T1 - The strength of weak learnability

AU - Schapire, Robert E.

N1 - Copyright:
Copyright 2020 Elsevier B.V., All rights reserved.

PY - 1989

Y1 - 1989

N2 - The problem of improving the accuracy of a hypothesis output by a learning algorithm in the distribution-free (probably approximately correct, or PAC) learning model is considered. A concept class is learnable (or strongly learnable) if, given access to a source of examples from the unknown concept, the learner with high probability is able to output a hypothesis that is correct on all but an arbitrarily small fraction of the instances. The concept class is weakly learnable if the learner can produce a hypothesis that performs only slightly better than random guessing. It is shown that these two notions of learnability are equivalent. An explicit method is described for directly converting a weak learning algorithm into one that achieves arbitrarily high accuracy. This construction may have practical applications as a tool for efficiently converting a mediocre learning algorithm into one that performs extremely well. In addition, the construction has some interesting theoretical consequences.

UR - http://www.scopus.com/inward/record.url?scp=0024768443&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0024768443&partnerID=8YFLogxK

U2 - 10.1109/sfcs.1989.63451

DO - 10.1109/sfcs.1989.63451

M3 - Conference contribution

AN - SCOPUS:0024768443

SN - 0818619821

SN - 9780818619823

T3 - Annual Symposium on Foundations of Computer Science (Proceedings)

SP - 28

EP - 33

BT - Annual Symposium on Foundations of Computer Science (Proceedings)

PB - IEEE

T2 - 30th Annual Symposium on Foundations of Computer Science

Y2 - 30 October 1989 through 1 November 1989

ER -