Abstract
We consider the problem of learning to predict as well as the best expert in a group of experts making continuous predictions. We assume the learning algorithm has prior knowledge of the maximum number of mistakes of the best expert. We propose a new master strategy that achieves the best known performance for on-line learning with continuous experts in the mistake-bounded model. Our ideas are based on drifting games, a generalization of boosting and on-line learning algorithms. We prove new lower bounds based on the drifting games framework which, though not as tight as previous bounds, have simpler proofs and do not require an enormous number of experts. We also extend previous lower bounds to show that our upper bounds are exactly tight for sufficiently many experts. A surprising consequence of our work is that continuous experts are only as powerful as experts making a binary prediction or no prediction in each round.
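The abstract summarizes the paper's results without stating its algorithm. As background for the setting it describes, the following is a minimal sketch of a classical multiplicative-weights (weighted-majority-style) master for continuous experts: each expert predicts a value in [0, 1], the master thresholds the weighted average, and an expert's weight decays by a factor that depends on how far its prediction was from the outcome, so a 0.5 prediction (effectively abstaining) is penalized half as much as an outright wrong binary prediction. This is an illustration of the standard setting only, not the drifting-game master strategy proposed in the paper; the function name and the exact penalty rule are our own choices.

```python
def weighted_majority(expert_preds, outcomes, beta=0.5):
    """Illustrative multiplicative-weights master for continuous experts.

    expert_preds: list of rounds, each a list of expert predictions in [0, 1].
    outcomes: list of true labels in {0, 1}, one per round.
    beta: weight-decay factor in (0, 1); an expert's weight is multiplied
          by beta ** |prediction - outcome| after each round, so the
          penalty interpolates smoothly between "right" and "wrong".
    Returns the number of mistakes the master makes.
    """
    n_experts = len(expert_preds[0])
    weights = [1.0] * n_experts
    mistakes = 0
    for preds, y in zip(expert_preds, outcomes):
        total = sum(weights)
        # Master predicts by thresholding the weighted average at 1/2.
        avg = sum(w * p for w, p in zip(weights, preds)) / total
        master = 1 if avg >= 0.5 else 0
        if master != y:
            mistakes += 1
        # Continuous penalty: full decay for a wrong binary prediction,
        # half decay (in the exponent) for an abstaining 0.5 prediction.
        weights = [w * beta ** abs(p - y) for w, p in zip(weights, preds)]
    return mistakes
```

With one perfect expert among a few poor ones, the master's mistake count stays bounded independently of the number of rounds, which is the kind of guarantee the mistake-bounded model asks for.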
Field | Value
---|---
Original language | English (US)
Pages (from-to) | 2670-2683
Number of pages | 14
Journal | Theoretical Computer Science
Volume | 411
Issue number | 29-30
DOIs |
State | Published - Jun 17 2010
Externally published | Yes
All Science Journal Classification (ASJC) codes
- Theoretical Computer Science
- General Computer Science
Keywords
- Drifting game
- Expert advice
- On-line learning