Title :
The strength of weak learnability
Author :
Schapire, Robert E.
Author_Institution :
MIT Lab. for Comput. Sci., Cambridge, MA, USA
Date :
30 Oct-1 Nov 1989
Abstract :
The problem of improving the accuracy of a hypothesis output by a learning algorithm in the distribution-free learning model is considered. A concept class is learnable (or strongly learnable) if, given access to a source of examples from the unknown concept, the learner with high probability is able to output a hypothesis that is correct on all but an arbitrarily small fraction of the instances. The concept class is weakly learnable if the learner can produce a hypothesis that performs only slightly better than random guessing. It is shown that these two notions of learnability are equivalent. An explicit method is described for directly converting a weak learning algorithm into one that achieves arbitrarily high accuracy. This construction may have practical applications as a tool for efficiently converting a mediocre learning algorithm into one that performs extremely well. In addition, the construction has some interesting theoretical consequences.
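Note :
The abstract does not spell out the conversion, but the widely cited account of this paper's construction trains three weak hypotheses on successively filtered distributions and takes their majority vote, applying that step recursively to drive the error down. Below is a minimal Python sketch of one level of that step under this reading; weak_learn, draw_example, and label are hypothetical stand-ins for the weak learner, the example oracle, and the unknown concept, and the rejection sampling is for clarity only (the paper analyzes the filtering far more carefully).

    import random

    def boost_once(weak_learn, draw_example, label):
        """One level of the majority-vote construction (a sketch,
        not the paper's full recursive algorithm).

        weak_learn(sample_fn) -> hypothesis: trains on examples drawn
            from sample_fn and returns a function x -> {0, 1}.
        draw_example() -> x: draws an instance from the target distribution.
        label(x) -> {0, 1}: oracle access to the unknown concept.
        """
        # h1: train on the original distribution.
        h1 = weak_learn(draw_example)

        # h2: train on a filtered distribution on which h1 is correct
        # with probability exactly 1/2, forcing h2 to add new information.
        def draw_for_h2():
            want_correct = random.random() < 0.5
            while True:
                x = draw_example()
                if (h1(x) == label(x)) == want_correct:
                    return x
        h2 = weak_learn(draw_for_h2)

        # h3: train only on instances where h1 and h2 disagree,
        # so h3 acts as the tie-breaker.
        def draw_for_h3():
            while True:
                x = draw_example()
                if h1(x) != h2(x):
                    return x
        h3 = weak_learn(draw_for_h3)

        # Combined hypothesis: majority vote of the three.
        return lambda x: 1 if h1(x) + h2(x) + h3(x) >= 2 else 0

Repeating boost_once recursively, with the combined hypothesis playing the role of a new (stronger) weak learner, yields accuracy arbitrarily close to 1, which is the sense in which weak and strong learnability coincide.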
Keywords :
computational complexity; equivalence classes; learning systems; concept class; distribution-free learning model; equivalent; learning algorithm; probability; source of examples; unknown concept; weak learnability; Boolean functions; Boosting; Computer science; Filtering; Laboratories; Polynomials; Upper bound;
Conference_Titel :
30th Annual Symposium on Foundations of Computer Science, 1989
Conference_Location :
Research Triangle Park, NC
Print_ISBN :
0-8186-1982-1
DOI :
10.1109/SFCS.1989.63451