### Abstract

We propose an algorithm called query by committee, in which a committee of students is trained on the same data set. The next query is chosen according to the principle of maximal disagreement. The algorithm is studied for two toy models: the high-low game and perceptron learning of another perceptron. As the number of queries goes to infinity, the committee algorithm yields asymptotically finite information gain. This leads to generalization error that decreases exponentially with the number of examples. This is in marked contrast to learning from randomly chosen inputs, for which the information gain approaches zero and the generalization error decreases with a relatively slow inverse power law. We suggest that asymptotically finite information gain may be an important characteristic of good query algorithms.
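The core idea above — train several students on the same labeled data, then query the input on which they disagree most — can be illustrated with a minimal sketch. This is an assumption-laden toy version for the perceptron setting, not the paper's exact formulation (which samples committee members from the version space, e.g. by Gibbs sampling); here the committee, the candidate pool, and all function names are illustrative.

```python
import random

def predict(w, x):
    """Perceptron output: the sign of the dot product w . x."""
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if s >= 0 else -1

def disagreement(committee, x):
    """A simple disagreement score for input x: the number of
    disagreeing pairs, i.e. (# votes for +1) * (# votes for -1).
    It is maximal when the committee is split evenly."""
    votes = [predict(w, x) for w in committee]
    return votes.count(1) * votes.count(-1)

def next_query(committee, candidates):
    """Principle of maximal disagreement: choose the candidate
    input on which the committee's predictions are most split."""
    return max(candidates, key=lambda x: disagreement(committee, x))

# Toy demonstration with a random committee and random candidate inputs.
random.seed(0)
dim = 5
committee = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(4)]
candidates = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(50)]
x_star = next_query(committee, candidates)
```

After the teacher labels `x_star`, each committee member would be retrained on the enlarged data set and the loop repeated; querying only high-disagreement inputs is what keeps the information gain per query from vanishing.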

Original language | English (US)
---|---
Title of host publication | Proceedings of the Fifth Annual ACM Workshop on Computational Learning Theory
Publisher | Publ by ACM
Pages | 287-294
Number of pages | 8
ISBN (Print) | 089791497X, 9780897914970
DOIs | https://doi.org/10.1145/130385.130417
State | Published - Jan 1 1992
Externally published | Yes
Event | Proceedings of the Fifth Annual ACM Workshop on Computational Learning Theory - Pittsburgh, PA, USA. Duration: Jul 27 1992 → Jul 29 1992

### Publication series

Name | Proceedings of the Fifth Annual ACM Workshop on Computational Learning Theory
---|---

### Other

Other | Proceedings of the Fifth Annual ACM Workshop on Computational Learning Theory
---|---
City | Pittsburgh, PA, USA
Period | 7/27/92 → 7/29/92

### All Science Journal Classification (ASJC) codes

- Engineering (all)


## Cite this

*Proceedings of the Fifth Annual ACM Workshop on Computational Learning Theory* (pp. 287-294). (Proceedings of the Fifth Annual ACM Workshop on Computational Learning Theory). Publ by ACM. https://doi.org/10.1145/130385.130417