TY - GEN
T1 - Space-alternating attribute-distributed sparse learning
AU - Shutin, Dmitriy
AU - Zheng, Haipeng
AU - Fleury, Bernard H.
AU - Kulkarni, Sanjeev R.
AU - Poor, H. Vincent
PY - 2010
Y1 - 2010
N2 - The paper proposes a new variational Bayesian algorithm for multivariate regression with attribute-distributed, or dimensionally distributed, data. In contrast to existing approaches, the proposed algorithm exploits a variational version of the Space-Alternating Generalized Expectation-Maximization (SAGE) algorithm, which, by means of admissible hidden data (an analog of the complete data in the EM framework), allows the parameters of a single agent to be updated while the parameters of the other agents are kept fixed. Learning can thus be implemented in a distributed fashion by updating the agents sequentially, one after another. Inspired by Bayesian sparsity techniques, the algorithm also constrains the agent parameters via parametric priors, which provides a mechanism for pruning irrelevant agents and for mitigating overfitting. Using synthetic data as well as measurement data from the UCI Machine Learning Repository, it is demonstrated that the proposed algorithm outperforms existing solutions both in the achieved mean-square error (MSE) and in convergence speed, owing to its ability to sparsify noninformative agents, while at the same time allowing distributed implementation and flexible agent update protocols.
AB - The paper proposes a new variational Bayesian algorithm for multivariate regression with attribute-distributed, or dimensionally distributed, data. In contrast to existing approaches, the proposed algorithm exploits a variational version of the Space-Alternating Generalized Expectation-Maximization (SAGE) algorithm, which, by means of admissible hidden data (an analog of the complete data in the EM framework), allows the parameters of a single agent to be updated while the parameters of the other agents are kept fixed. Learning can thus be implemented in a distributed fashion by updating the agents sequentially, one after another. Inspired by Bayesian sparsity techniques, the algorithm also constrains the agent parameters via parametric priors, which provides a mechanism for pruning irrelevant agents and for mitigating overfitting. Using synthetic data as well as measurement data from the UCI Machine Learning Repository, it is demonstrated that the proposed algorithm outperforms existing solutions both in the achieved mean-square error (MSE) and in convergence speed, owing to its ability to sparsify noninformative agents, while at the same time allowing distributed implementation and flexible agent update protocols.
UR - http://www.scopus.com/inward/record.url?scp=78349278494&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=78349278494&partnerID=8YFLogxK
U2 - 10.1109/CIP.2010.5604254
DO - 10.1109/CIP.2010.5604254
M3 - Conference contribution
AN - SCOPUS:78349278494
SN - 9781424464593
T3 - 2010 2nd International Workshop on Cognitive Information Processing, CIP2010
SP - 209
EP - 214
BT - 2010 2nd International Workshop on Cognitive Information Processing, CIP2010
T2 - 2010 2nd International Workshop on Cognitive Information Processing, CIP2010
Y2 - 14 June 2010 through 16 June 2010
ER -