TY - CPAPER
T1 - Fast adaptive variational sparse Bayesian learning with automatic relevance determination
AU - Shutin, Dmitriy
AU - Buchgraber, Thomas
AU - Kulkarni, Sanjeev R.
AU - Poor, H. Vincent
PY - 2011
Y1 - 2011
N2 - In this work, a new adaptive fast variational sparse Bayesian learning (V-SBL) algorithm is proposed as a variational counterpart of the fast marginal likelihood maximization approach to SBL. It allows one to adaptively construct a sparse regression or classification function as a linear combination of a few basis functions by minimizing the variational free energy. In the case of non-informative hyperpriors, also referred to as automatic relevance determination, the minimization of the free energy can be efficiently realized by computing the fixed points of the update expressions for the variational distribution of the sparsity parameters. The criteria that establish convergence to these fixed points, termed pruning conditions, allow an efficient addition or removal of basis functions; they also have a simple and intuitive interpretation in terms of a component's signal-to-noise ratio. It is demonstrated that this interpretation allows a simple empirical adjustment of the pruning conditions, which in turn improves the sparsity of SBL and drastically accelerates the convergence rate of the algorithm. Experimental evidence collected with synthetic data demonstrates the effectiveness of the proposed learning scheme.
UR - http://www.scopus.com/inward/record.url?scp=80051629562&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=80051629562&partnerID=8YFLogxK
DO - 10.1109/ICASSP.2011.5946760
M3 - Conference contribution
AN - SCOPUS:80051629562
SN - 9781457705397
T3 - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
SP - 2180
EP - 2183
BT - 2011 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2011 - Proceedings
T2 - 36th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2011
Y2 - 22 May 2011 through 27 May 2011
ER -