Attribute-distributed learning: Models, limits, and algorithms

Research output: Contribution to journal › Article › peer-review

40 Scopus citations


This paper introduces a framework for distributed learning (regression) on attribute-distributed data. First, the convergence properties of attribute-distributed regression with an additive model and a fusion center are discussed, and the convergence rate and uniqueness of the limit are shown for some special cases. Then, taking residual refitting (or boosting) as a prototype algorithm, three different schemes are proposed and compared: Simple Iterative Projection, a greedy algorithm, and a parallel algorithm (with its derivatives). The first two are sequential and have low communication overhead, but are susceptible to overtraining. The parallel algorithm has the best performance, but also significant communication requirements. Instead of directly refitting the ensemble residual sequentially, the parallel algorithm redistributes the residual to each agent in proportion to the coefficients of the optimal linear combination of the current individual estimators. A well-designed residual redistribution scheme also improves the ability to eliminate irrelevant attributes. The performance of the algorithms is compared via extensive simulations. Communication issues are also considered: the amounts of data exchanged by the three algorithms are compared, and the three methods are generalized to scenarios without a fusion center.
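The parallel scheme described above can be illustrated with a minimal sketch. The attribute partition, the least-squares base learners, and the normalization of the redistribution weights below are illustrative assumptions, not the paper's exact algorithm: each agent fits an estimator on its own attribute subset, the fusion center computes the optimal linear combination of the individual estimators, and the ensemble residual is handed back to agents in proportion to those combination coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic additive data: each "agent" observes a disjoint attribute subset.
n, d = 200, 6
X = rng.normal(size=(n, d))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0, 0.0]) + 0.1 * rng.normal(size=n)
agent_attrs = [[0, 1], [2, 3], [4, 5]]  # hypothetical attribute partition

def fit_agent(Xa, target):
    """Least-squares base learner on one agent's attribute subset."""
    w, *_ = np.linalg.lstsq(Xa, target, rcond=None)
    return Xa @ w

estimates = np.zeros((len(agent_attrs), n))
targets = [y.copy() for _ in agent_attrs]
for _ in range(20):
    # Each agent refits its current (redistributed) target in parallel.
    for j, attrs in enumerate(agent_attrs):
        estimates[j] = fit_agent(X[:, attrs], targets[j])
    # Fusion center: optimal linear combination of individual estimators.
    c, *_ = np.linalg.lstsq(estimates.T, y, rcond=None)
    ensemble = estimates.T @ c
    residual = y - ensemble
    # Redistribute the ensemble residual in proportion to the coefficients
    # (absolute-value normalization is an assumption of this sketch).
    total = np.sum(np.abs(c)) + 1e-12
    for j in range(len(agent_attrs)):
        targets[j] = c[j] * estimates[j] + (np.abs(c[j]) / total) * residual

mse = np.mean((y - ensemble) ** 2)
```

On this additive synthetic problem the ensemble error should settle near the noise level; agents whose attributes are irrelevant receive small coefficients and correspondingly small shares of the residual, which is the mechanism the abstract credits with eliminating irrelevant attributes.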

Original language: English (US)
Article number: 5605268
Pages (from-to): 386-398
Number of pages: 13
Journal: IEEE Transactions on Signal Processing
Issue number: 1
State: Published - 2011

All Science Journal Classification (ASJC) codes

  • Signal Processing
  • Electrical and Electronic Engineering


Keywords

  • Distributed information systems
  • distributed processing
  • statistical learning


