Abstract
In this paper, an algorithm is developed for collaboratively training networks of kernel-linear least-squares regression estimators. The algorithm is shown to distributively solve a relaxation of the classical centralized least-squares regression problem. A statistical analysis shows that the generalization error afforded agents by the collaborative training algorithm can be bounded in terms of the relationship between the network topology and the representational capacity of the relevant reproducing kernel Hilbert space. Numerical experiments suggest that the algorithm is effective at reducing noise. The algorithm is relevant to the problem of distributed learning in wireless sensor networks by virtue of its exploitation of local communication. Several new questions for statistical learning theory are proposed.
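The centralized problem that the collaborative algorithm relaxes is standard regularized kernel least-squares regression over a reproducing kernel Hilbert space. As background, here is a minimal NumPy sketch of that centralized estimator; the Gaussian kernel, regularization constant, and synthetic data are illustrative assumptions, not details from the paper, and this is the centralized baseline rather than the paper's distributed algorithm:

```python
import numpy as np

def gaussian_kernel(X, Z, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * ||x_i - z_j||^2).

    The Gaussian kernel is an assumed choice for illustration; any
    positive-definite kernel defines a valid RKHS.
    """
    sq_dists = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def kernel_least_squares(X, y, lam=1e-3, gamma=10.0):
    """Centralized regularized kernel least squares.

    Solves min_f (1/n) * sum_i (f(x_i) - y_i)^2 + lam * ||f||_K^2,
    whose representer-theorem solution is f(.) = sum_i alpha_i K(., x_i)
    with alpha = (K + lam * n * I)^{-1} y.
    """
    n = len(X)
    K = gaussian_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * n * np.eye(n), y)
    # Return a predictor that evaluates the kernel expansion at new points.
    return lambda Xnew: gaussian_kernel(Xnew, X, gamma) @ alpha

# Fit a noisy sine curve with the centralized estimator (toy data).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(60, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(60)
f = kernel_least_squares(X, y)

Xtest = np.linspace(-1, 1, 50)[:, None]
err = np.mean((f(Xtest) - np.sin(3 * Xtest[:, 0])) ** 2)
```

In the collaborative setting the paper studies, each sensor holds only its own samples and exchanges messages with network neighbors, so no node ever forms the full Gram matrix `K` computed above.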
| Field | Value |
| --- | --- |
| Original language | English (US) |
| Pages (from-to) | 1856-1871 |
| Number of pages | 16 |
| Journal | IEEE Transactions on Information Theory |
| Volume | 55 |
| Issue number | 4 |
| DOIs | |
| State | Published - 2009 |
All Science Journal Classification (ASJC) codes
- Information Systems
- Computer Science Applications
- Library and Information Sciences
Keywords
- Collaboration
- Distributed learning
- Empirical risk minimization
- Kernel methods
- Learning
- Nonparametric
- Sensor networks