Abstract
Motivated by sensor networks and other distributed settings, several models for distributed learning are presented. The models differ from classical work in statistical pattern recognition by allocating the observations of an i.i.d. sampling process among the members of a network of learning agents. The agents are limited in their ability to communicate with a fusion center, so the amount of information available for classification or regression is constrained. For several simple communication models, questions of universal consistency are addressed; that is, the asymptotics of several agent decision rules and fusion rules are considered in both binary classification and regression frameworks. These models capture salient features of distributed environments and raise new questions about universal consistency. Insofar as they offer a useful picture of distributed scenarios, this paper asks whether the guarantees provided by Stone's Theorem in centralized environments continue to hold in distributed settings.
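To make the kind of model the abstract describes concrete, the sketch below implements a hypothetical one-bit-per-agent scheme: each agent holds a single observation of the i.i.d. process and, when queried with a test point, sends its label to the fusion center only if the point falls within a radius `h` of its observation; the fusion center majority-votes the bits it receives. The abstention radius, the function names, and the default decision are illustrative assumptions, not the paper's actual decision or fusion rules.

```python
import numpy as np

def fusion_classify(x, agent_data, h=0.5):
    """Hypothetical one-bit fusion rule (illustration only): each agent
    holds one labeled observation (x_i, y_i) and transmits a single bit
    (its label) only when the query x lies within distance h of x_i.
    The fusion center majority-votes the received bits."""
    votes = []
    for x_i, y_i in agent_data:
        if np.linalg.norm(x - x_i) <= h:  # agent deems its sample relevant
            votes.append(y_i)             # one-bit message to the fusion center
    if not votes:                         # no agent responded: default decision
        return 0
    return int(np.mean(votes) > 0.5)      # majority vote over received bits

# Example: 100 agents, 1-D features, binary labels
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(100, 1))
Y = (X[:, 0] > 0.5).astype(int)
agents = list(zip(X, Y))
print(fusion_classify(np.array([0.8]), agents))  # expected: 1
```

A Stone-type consistency argument for such a rule would require the radius `h` to shrink and the expected number of responding agents to grow as the network size increases; whether guarantees of this kind survive the communication constraints is precisely the question the paper studies.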
| Original language | English (US) |
|---|---|
| Pages (from-to) | 442-456 |
| Number of pages | 15 |
| Journal | Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science) |
| Volume | 3120 |
| DOIs | |
| State | Published - 2004 |
| Event | 17th Annual Conference on Learning Theory, COLT 2004 - Banff, Canada; Duration: Jul 1 2004 → Jul 4 2004 |
All Science Journal Classification (ASJC) codes
- Theoretical Computer Science
- General Computer Science