Predicting nearly as well as the best pruning of a decision tree

David P. Helmbold, Robert E. Schapire

Research output: Chapter in Book/Report/Conference proceeding (Conference contribution)


Abstract

Many algorithms for inferring a decision tree from data involve a two-phase process: First, a very large decision tree is grown which typically ends up "over-fitting" the data. To reduce over-fitting, in the second phase, the tree is pruned using one of a number of available methods. The final tree is then output and used for classification on test data. In this paper, we suggest an alternative approach to the pruning phase. Using a given unpruned decision tree, we present a new method of making predictions on test data, and we prove that our algorithm's performance will not be "much worse" (in a precise technical sense) than the predictions made by the best reasonably small pruning of the given decision tree. Thus, our procedure is guaranteed to be competitive (in terms of the quality of its predictions) with any pruning algorithm. We prove that our procedure is very efficient and highly robust. Our method can be viewed as a synthesis of two previously studied techniques. First, we apply Cesa-Bianchi et al.'s [3] results on predicting using "expert advice" (where we view each pruning as an "expert") to obtain an algorithm that has provably low prediction loss, but that is computationally infeasible. Next, we generalize and apply a method developed by Buntine [2, 1] and Willems, Shtarkov and Tjalkens [18, 19] to derive a very efficient implementation of this procedure.
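To make the "expert advice" framework concrete: the forecaster keeps one weight per expert (here, per pruning), predicts with the weighted average of the experts' predictions, and multiplicatively discounts each expert by its loss after the true outcome is revealed. The sketch below is a generic exponential-weights forecaster in the style of Cesa-Bianchi et al., not the paper's efficient tree-based implementation; the function name, learning rate `eta`, and use of absolute loss are illustrative assumptions.

```python
import math

def predict_with_expert_advice(expert_predictions, outcomes, eta=2.0):
    """Generic exponential-weights forecaster (illustrative sketch).

    expert_predictions: list of rounds; each round is a list of the
        experts' predictions in [0, 1] (one entry per expert/pruning).
    outcomes: the true label in [0, 1] for each round.
    Returns the forecaster's cumulative absolute loss.
    """
    n_experts = len(expert_predictions[0])
    weights = [1.0] * n_experts  # uniform prior over experts
    total_loss = 0.0
    for preds, y in zip(expert_predictions, outcomes):
        w_sum = sum(weights)
        # Predict with the weight-averaged expert prediction.
        p = sum(w * x for w, x in zip(weights, preds)) / w_sum
        total_loss += abs(p - y)
        # Multiplicative update: experts with larger loss lose weight,
        # so the forecaster tracks the best expert up to an O(log n) gap.
        weights = [w * math.exp(-eta * abs(x - y))
                   for w, x in zip(weights, preds)]
    return total_loss
```

Run naively over all prunings of a tree, this is infeasible because the number of prunings is exponential in the tree size; the paper's contribution is an implementation that maintains the same weighted mixture implicitly, with per-node bookkeeping, via the techniques of Buntine and of Willems, Shtarkov and Tjalkens.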

Original language: English (US)
Title of host publication: Proceedings of the 8th Annual Conference on Computational Learning Theory, COLT 1995
Publisher: Association for Computing Machinery, Inc
Pages: 61-68
Number of pages: 8
ISBN (Electronic): 0897917235, 9780897917230
DOI: 10.1145/225298.225305
State: Published - Jul 5, 1995
Externally published: Yes
Event: 8th Annual Conference on Computational Learning Theory, COLT 1995 - Santa Cruz, United States
Duration: Jul 5, 1995 - Jul 8, 1995

Publication series

Name: Proceedings of the 8th Annual Conference on Computational Learning Theory, COLT 1995
Volume: 1995-January

All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • Artificial Intelligence
  • Software


Cite this

Helmbold, D. P., & Schapire, R. E. (1995). Predicting nearly as well as the best pruning of a decision tree. In Proceedings of the 8th Annual Conference on Computational Learning Theory, COLT 1995 (pp. 61-68). (Proceedings of the 8th Annual Conference on Computational Learning Theory, COLT 1995; Vol. 1995-January). Association for Computing Machinery, Inc. https://doi.org/10.1145/225298.225305