We study online learning algorithms that predict by combining the predictions of several subordinate prediction algorithms, sometimes called "experts." These simple algorithms belong to the multiplicative weights family. Their performance degrades only logarithmically with the number of experts, making them particularly useful in applications where the number of experts is very large. However, in applications such as text categorization, it is often natural for some of the experts to abstain from making predictions on some of the instances. We show how to transform algorithms that assume all experts are always awake into algorithms that do not require this assumption, and we show how to derive the corresponding loss bounds. Our method is very general and can be applied to a large family of online learning algorithms. We also give applications to various prediction models, including decision graphs and "switching" experts.
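To make the abstract's setting concrete, below is a minimal sketch of a multiplicative-weights learner in which some experts may abstain ("sleep") on a given instance. This is an illustrative simplification, not the paper's exact algorithm or its bound-preserving update: function names, the absolute-loss choice, and the learning rate `eta` are all assumptions, and the renormalization step used in the paper to obtain loss bounds is omitted.

```python
import math

def sleeping_experts_predict(weights, advice):
    """Weighted average of the awake experts' predictions.

    `weights` is a list of nonnegative expert weights; `advice` maps an
    expert index to its prediction in [0, 1]. Experts absent from
    `advice` abstain and do not influence the prediction.
    """
    total = sum(weights[i] for i in advice)
    return sum(weights[i] * advice[i] for i in advice) / total

def sleeping_experts_update(weights, advice, outcome, eta=0.5):
    """Multiplicative update: each awake expert is penalized
    exponentially in its loss (absolute loss here, as an illustration);
    sleeping experts keep their weights unchanged.
    """
    new_weights = list(weights)
    for i, prediction in advice.items():
        loss = abs(prediction - outcome)
        new_weights[i] *= math.exp(-eta * loss)
    return new_weights
```

For example, with three equally weighted experts where expert 2 abstains, only experts 0 and 1 are averaged for the prediction and only their weights shrink after the outcome is revealed; expert 2's weight is untouched, reflecting the "asleep" semantics described in the abstract.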
Original language: English (US)
Number of pages: 10
Journal: Conference Proceedings of the Annual ACM Symposium on Theory of Computing
State: Published - 1997
Event: 29th Annual ACM Symposium on Theory of Computing (1997), El Paso, TX, USA
Duration: May 4 1997 → May 6 1997