We study the convergence and consistency of boosting algorithms for classification. The standard method, as the sample size increases, say from m to m + 1, is to re-initialize the boosting algorithm with an arbitrary prediction rule. In contrast to this "batch" approach, we propose a boosting procedure that is recursive in the sense that, for sample size m + 1, the algorithm is restarted with the composite classifier that was obtained for sample size m at a specific point, the linking point. We adopt the regularization technique of early stopping, which consists in stopping the procedure once the 1-norm of the composite classifier reaches a prescribed bound. We prove that such recursive boosting methods achieve consistency provided certain criteria on the stopping and linking points are met. We show that these conditions can be satisfied for widely used loss functions.
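To make the recursive scheme concrete, the following is a minimal sketch, not the paper's actual procedure: all names and parameters (fit_stump, l1_budget, the fixed step size, and the choice of the exponential loss with decision stumps) are illustrative assumptions. It shows boosting restarted from the previous composite classifier as each new observation arrives, with early stopping once the 1-norm of the coefficients exceeds a budget that grows with the sample size m.

```python
import numpy as np

def fit_stump(X, y, w):
    """Hypothetical weak learner: fit a weighted decision stump.
    Returns (feature index, threshold, sign)."""
    best, best_err = None, np.inf
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (+1.0, -1.0):
                pred = s * np.sign(X[:, j] - t)
                pred[pred == 0] = s
                err = np.sum(w * (pred != y))
                if err < best_err:
                    best_err, best = err, (j, t, s)
    return best

def stump_predict(stump, X):
    j, t, s = stump
    pred = s * np.sign(X[:, j] - t)
    pred[pred == 0] = s
    return pred

def boost(X, y, stumps, coefs, step, l1_budget):
    """Boost under the exponential loss, starting from the composite
    classifier (stumps, coefs) rather than from scratch, and stop early
    once the 1-norm of the coefficients would exceed l1_budget."""
    F = np.zeros(len(y))
    for h, a in zip(stumps, coefs):
        F += a * stump_predict(h, X)
    while sum(abs(a) for a in coefs) + step <= l1_budget:
        w = np.exp(-y * F)            # exponential-loss weights
        w /= w.sum()
        h = fit_stump(X, y, w)
        stumps.append(h)
        coefs.append(step)            # fixed small step size (assumption)
        F += step * stump_predict(h, X)
    return stumps, coefs

def recursive_boost(stream, step=0.1, budget=lambda m: np.sqrt(m)):
    """For each new sample, restart boosting from the composite classifier
    kept from the run at the previous sample size (here, hypothetically,
    the whole previous classifier plays the role of the linking point);
    the 1-norm budget grows with m."""
    stumps, coefs = [], []
    X_all, y_all = [], []
    for x, label in stream:           # stream yields (features, label in {-1, +1})
        X_all.append(x)
        y_all.append(label)
        X, y = np.array(X_all), np.array(y_all, dtype=float)
        stumps, coefs = boost(X, y, stumps, coefs, step, budget(len(y)))
    return stumps, coefs
```

Under this reading, the growth rate of the 1-norm budget plays the role of the stopping criterion, and the point at which the previous composite classifier is resumed plays the role of the linking point; the consistency result asserts that suitable choices of both yield a consistent classifier.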