The paper proposes a new variational Bayesian algorithm for ℓ1-penalized multivariate regression with attribute-distributed data. The algorithm combines a variational Bayesian version of the SAGE algorithm, which enables training of individual agents in a distributed fashion, with sparse Bayesian learning (SBL) that places a hierarchical sparsity prior on the agent weights. SBL constrains the weights of individual agents, thus reducing overfitting and removing or suppressing poorly performing agents in the ensemble estimator. The ℓ1 constraint is introduced via a product of a Gaussian and an exponential probability density function, with the resulting marginalized prior being a Laplace pdf. This hierarchical formulation of the prior allows computation of the stationary points of the variational update expressions for the prior parameters, as well as derivation of conditions that ensure convergence to these stationary points. Using synthetic data, it is demonstrated that the proposed algorithm achieves a low MSE and outperforms competing algorithms in its ability to sparsify non-informative agents, while at the same time allowing a distributed implementation and flexible agent update protocols.
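The Gaussian-exponential hierarchy described above can be checked numerically. The following sketch (illustrative only, not the paper's code; the rate parameterization with `lam` is an assumed normalization) samples weights from the two-stage prior, w | gamma ~ N(0, gamma) with gamma ~ Exponential(rate lam^2/2), and confirms that the marginal moments match those of a Laplace density with scale 1/lam:

```python
import numpy as np

# Scale-mixture check: if w | gamma ~ N(0, gamma) and
# gamma ~ Exponential(rate = lam^2 / 2), the marginal of w
# is Laplace(0, b) with b = 1 / lam.
rng = np.random.default_rng(0)
lam = 1.0
n = 1_000_000

# NumPy parameterizes the exponential by its scale = 1 / rate.
gamma = rng.exponential(scale=2.0 / lam**2, size=n)
w = rng.normal(0.0, np.sqrt(gamma))  # Gaussian with variance gamma

# Laplace(0, b) moments: E|w| = b and Var(w) = 2 b^2.
print(np.abs(w).mean())  # ~ 1.0 for lam = 1
print(w.var())           # ~ 2.0 for lam = 1
```

Integrating the Gaussian against the exponential mixing density analytically gives p(w) = (lam/2) exp(-lam |w|), which is why maximizing the resulting log-posterior reproduces an ℓ1 penalty on the weights.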