TY - GEN
T1 - Stationary point variational Bayesian attribute-distributed sparse learning with ℓ1 sparsity constraints
AU - Shutin, Dmitriy
AU - Kulkarni, Sanjeev R.
AU - Poor, H. Vincent
PY - 2011
Y1 - 2011
N2 - The paper proposes a new variational Bayesian algorithm for ℓ1-penalized multivariate regression with attribute-distributed data. The algorithm is based on a variational Bayesian version of the SAGE algorithm, which trains individual agents in a distributed fashion, combined with sparse Bayesian learning (SBL) using hierarchical sparsity prior modeling of the agent weights. SBL constrains the weights of individual agents, thus reducing the effects of overfitting and removing or suppressing poorly performing agents in the ensemble estimator. The ℓ1 constraint is introduced using a product of a Gaussian and an exponential probability density function, with the resulting marginalized prior being a Laplace pdf. Such a hierarchical formulation of the prior allows for computation of the stationary points of the variational update expressions for the prior parameters, as well as derivation of conditions that ensure convergence to these stationary points. Using synthetic data, it is demonstrated that the proposed algorithm performs very well in terms of the achieved MSE and outperforms other algorithms in its ability to sparsify non-informative agents, while at the same time allowing distributed implementation and flexible agent update protocols.
AB - The paper proposes a new variational Bayesian algorithm for ℓ1-penalized multivariate regression with attribute-distributed data. The algorithm is based on a variational Bayesian version of the SAGE algorithm, which trains individual agents in a distributed fashion, combined with sparse Bayesian learning (SBL) using hierarchical sparsity prior modeling of the agent weights. SBL constrains the weights of individual agents, thus reducing the effects of overfitting and removing or suppressing poorly performing agents in the ensemble estimator. The ℓ1 constraint is introduced using a product of a Gaussian and an exponential probability density function, with the resulting marginalized prior being a Laplace pdf. Such a hierarchical formulation of the prior allows for computation of the stationary points of the variational update expressions for the prior parameters, as well as derivation of conditions that ensure convergence to these stationary points. Using synthetic data, it is demonstrated that the proposed algorithm performs very well in terms of the achieved MSE and outperforms other algorithms in its ability to sparsify non-informative agents, while at the same time allowing distributed implementation and flexible agent update protocols.
KW - Attribute-distributed learning
KW - sparse Bayesian learning
KW - variational Bayesian inference
UR - http://www.scopus.com/inward/record.url?scp=84857162221&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84857162221&partnerID=8YFLogxK
U2 - 10.1109/CAMSAP.2011.6136003
DO - 10.1109/CAMSAP.2011.6136003
M3 - Conference contribution
AN - SCOPUS:84857162221
SN - 9781457721052
T3 - 2011 4th IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing, CAMSAP 2011
SP - 277
EP - 280
BT - 2011 4th IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing, CAMSAP 2011
T2 - 2011 4th IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing, CAMSAP 2011
Y2 - 13 December 2011 through 16 December 2011
ER -