Abstract
We introduce a new formal model in which a learning algorithm must combine a collection of potentially poor but statistically independent hypothesis functions in order to approximate an unknown target function arbitrarily well. Our motivation includes the question of how to make optimal use of multiple independent runs of a mediocre learning algorithm, as well as settings in which the many hypotheses are obtained by a distributed population of identical learning agents.
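The abstract leaves the combination scheme unspecified; as a rough illustration of why statistical independence among the hypotheses matters, the sketch below simulates a handful of mediocre hypotheses that each err on an independent fraction of inputs and combines them by simple unweighted majority vote. The target function, the 30% error rate, the 25 hypotheses, and the voting rule are all assumptions made up for this demo, not the model or algorithm analyzed in the paper.

```python
# Illustration only (not the paper's algorithm): when hypotheses err
# independently, even an unweighted majority vote over many mediocre
# hypotheses can track the target far better than any single hypothesis.
import numpy as np

rng = np.random.default_rng(0)


def target(x):
    # Stand-in for the unknown Boolean target function (hypothetical choice).
    return (x.sum(axis=1) > 0).astype(int)


def make_hypothesis(error_rate, seed):
    # Simulates one independent run of a mediocre learner: the returned
    # hypothesis disagrees with the target on a random error_rate fraction
    # of the evaluated inputs, independently of the other hypotheses.
    def h(x):
        local = np.random.default_rng(seed)
        flip = local.random(len(x)) < error_rate
        return np.where(flip, 1 - target(x), target(x))
    return h


def majority_vote(hypotheses, x):
    # Combine statistically independent hypotheses by unweighted majority vote.
    votes = np.stack([h(x) for h in hypotheses])
    return (votes.mean(axis=0) >= 0.5).astype(int)


x = rng.normal(size=(10_000, 5))
hypotheses = [make_hypothesis(0.3, seed) for seed in range(25)]

single_error = np.mean(hypotheses[0](x) != target(x))
combined_error = np.mean(majority_vote(hypotheses, x) != target(x))
print(f"single-hypothesis error: {single_error:.3f}")
print(f"majority-vote error over 25 hypotheses: {combined_error:.3f}")
```

Running the sketch, each individual hypothesis errs about 30% of the time while the majority vote errs on only a few percent of inputs, which is the basic effect that independence makes possible.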
| Original language | English (US) |
|---|---|
| Pages (from-to) | 255-276 |
| Number of pages | 22 |
| Journal | Machine Learning |
| Volume | 18 |
| Issue number | 2 |
| State | Published - Feb 1995 |
All Science Journal Classification (ASJC) codes
- Software
- Artificial Intelligence
Keywords
- PAC learning
- computational learning theory
- learning agents
- machine learning