Abstract
We present a method for maintaining mixtures of prunings of a prediction or decision tree that extends the "node-based" prunings of [Bun90, WST95, HS95] to the larger class of edge-based prunings. The method includes an efficient online weight allocation algorithm that can be used for prediction, compression and classification. Although the set of edge-based prunings of a given tree is much larger than that of node-based prunings, our algorithm has space and time complexity similar to that of previous mixture algorithms for trees. Using the general on-line framework of Freund and Schapire [FS95], we prove that our algorithm correctly maintains the mixture weights for edge-based prunings under any bounded loss function. We also give a similar algorithm, with a corresponding weight allocation scheme, for the logarithmic loss function. Finally, we describe experiments comparing node-based and edge-based mixture models for estimating the probability of the next word in English text, which show the advantages of edge-based models.
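The paper's edge-based algorithm is not reproduced here, but the node-based pruning mixtures it extends (the context-tree weighting scheme of [WST95]) admit a compact sketch. The code below is an illustrative assumption-laden toy, not the authors' method: it assumes a binary alphabet, a fixed context depth, and the Krichevsky-Trofimov estimator at each node, and it maintains, per node, a weighted probability `pw` that mixes "prune at this node" against "defer to the children" with weight 1/2 each. The root's `pw` is then the mixture probability of the sequence over all node-based prunings, for the logarithmic loss setting the abstract mentions.

```python
class Node:
    """One context-tree node holding bit counts and two probabilities:
    pe -- Krichevsky-Trofimov estimate for the bits routed to this node,
    pw -- weighted probability mixing 'prune here' vs 'use the children'."""
    def __init__(self, depth):
        self.counts = [0, 0]
        self.pe = 1.0
        self.pw = 1.0
        self.children = None if depth == 0 else [Node(depth - 1), Node(depth - 1)]

def update(node, context, bit):
    """Feed one bit down the context path and refresh pe/pw bottom-up."""
    # Sequential KT update: P(bit) = (count[bit] + 1/2) / (total + 1).
    node.pe *= (node.counts[bit] + 0.5) / (sum(node.counts) + 1)
    node.counts[bit] += 1
    if node.children is None:
        node.pw = node.pe  # a leaf cannot be split further
    else:
        update(node.children[context[0]], context[1:], bit)
        # Mix the two hypotheses: prune at this node, or defer to children.
        node.pw = 0.5 * node.pe + 0.5 * node.children[0].pw * node.children[1].pw

DEPTH = 2
root = Node(DEPTH)
bits = [0, 1, 0, 1, 0, 1, 0, 1]
history = [0] * DEPTH                   # pad the initial context with zeros
for b in bits:
    context = history[-DEPTH:][::-1]    # most recent bit first
    update(root, context, b)
    history.append(b)

# root.pw is now the mixture probability of `bits` over all node-based prunings.
```

Because each update touches only the nodes on one root-to-leaf context path, the per-symbol cost is linear in the tree depth; the paper's contribution is achieving comparable complexity while mixing over the much larger class of edge-based prunings.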
| Original language | English (US) |
|---|---|
| Pages | 114-121 |
| Number of pages | 8 |
| State | Published - 1997 |
| Event | Proceedings of the 1997 10th Annual Conference on Computational Learning Theory - Nashville, TN, USA. Duration: Jul 6 1997 → Jul 9 1997 |
All Science Journal Classification (ASJC) codes
- Computational Mathematics