### Abstract

Many algorithms for inferring a decision tree from data involve a two-phase process: First, a very large decision tree is grown which typically ends up "over-fitting" the data. To reduce over-fitting, in the second phase, the tree is pruned using one of a number of available methods. The final tree is then output and used for classification on test data. In this paper, we suggest an alternative approach to the pruning phase. Using a given unpruned decision tree, we present a new method of making predictions on test data, and we prove that our algorithm's performance will not be "much worse" (in a precise technical sense) than the predictions made by the best reasonably small pruning of the given decision tree. Thus, our procedure is guaranteed to be competitive (in terms of the quality of its predictions) with any pruning algorithm. We prove that our procedure is very efficient and highly robust. Our method can be viewed as a synthesis of two previously studied techniques. First, we apply Cesa-Bianchi et al.'s [3] results on predicting using "expert advice" (where we view each pruning as an "expert") to obtain an algorithm that has provably low prediction loss, but that is computationally infeasible. Next, we generalize and apply a method developed by Buntine [2, 1] and Willems, Shtarkov and Tjalkens [18, 19] to derive a very efficient implementation of this procedure.
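The abstract's starting point is Cesa-Bianchi et al.'s "expert advice" framework, in which each pruning is treated as one expert and predictions are weighted averages that down-weight experts with high past loss. As a rough illustration only (this is a generic exponential-weights sketch, not the paper's tree-structured algorithm, and all names here are hypothetical), the per-round mechanics look like:

```python
import math

def hedge_predict(expert_preds, weights):
    """Predict with the weighted average of the experts' predictions."""
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, expert_preds)) / total

def hedge_update(expert_preds, weights, outcome, eta=1.0):
    """Multiplicatively shrink each expert's weight by its squared loss."""
    return [w * math.exp(-eta * (p - outcome) ** 2)
            for w, p in zip(weights, expert_preds)]

# Toy run with two "prunings": one always predicts 1, one always 0.
weights = [1.0, 1.0]
for outcome in [1, 1, 1, 0, 1]:
    preds = [1.0, 0.0]
    yhat = hedge_predict(preds, weights)      # master prediction
    weights = hedge_update(preds, weights, outcome)
# The expert (pruning) with smaller cumulative loss ends with more weight.
```

Run naively over all prunings of a tree this is infeasible, since the number of prunings is exponential in the tree size; the paper's contribution is a Buntine/Willems-Shtarkov-Tjalkens-style recursion that maintains these weights implicitly at the tree's nodes.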

Original language | English (US)
---|---
Title of host publication | Proceedings of the 8th Annual Conference on Computational Learning Theory, COLT 1995
Publisher | Association for Computing Machinery, Inc
Pages | 61-68
Number of pages | 8
ISBN (Electronic) | 0897917235, 9780897917230
DOIs | https://doi.org/10.1145/225298.225305
State | Published - Jul 5 1995
Externally published | Yes
Event | 8th Annual Conference on Computational Learning Theory, COLT 1995 - Santa Cruz, United States. Duration: Jul 5 1995 → Jul 8 1995

### Publication series

Name | Proceedings of the 8th Annual Conference on Computational Learning Theory, COLT 1995
---|---
Volume | 1995-January

### Other

Other | 8th Annual Conference on Computational Learning Theory, COLT 1995
---|---
Country | United States
City | Santa Cruz
Period | 7/5/95 → 7/8/95

### All Science Journal Classification (ASJC) codes

- Theoretical Computer Science
- Artificial Intelligence
- Software

## Fingerprint

Research topics of 'Predicting nearly as well as the best pruning of a decision tree'.

## Cite this

Predicting nearly as well as the best pruning of a decision tree. In *Proceedings of the 8th Annual Conference on Computational Learning Theory, COLT 1995* (pp. 61-68). (Proceedings of the 8th Annual Conference on Computational Learning Theory, COLT 1995; Vol. 1995-January). Association for Computing Machinery, Inc. https://doi.org/10.1145/225298.225305