Convergence and consistency of regularized boosting with weakly dependent observations

Research output: Contribution to journal › Article › peer-review

5 Scopus citations


This paper studies the statistical convergence and consistency of regularized boosting methods, where the samples need not be independent and identically distributed but can come from stationary weakly dependent sequences. Consistency is proven for the composite classifiers that result from a regularization achieved by restricting the 1-norm of the base classifiers' weights. The less restrictive nature of sampling considered here is manifested in the consistency result through a generalized condition on the growth of the regularization parameter. The weaker the sample dependence, the faster the regularization parameter is allowed to grow with increasing sample size. A consistency result is also provided for data-dependent choices of the regularization parameter.
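To make the regularization scheme concrete, here is a minimal illustrative sketch (not the paper's algorithm) of boosting with decision stumps in which the combined classifier's weight vector is rescaled so that its 1-norm stays within a budget `lam`; in the paper's setting this budget corresponds to the regularization parameter λ_n, which is allowed to grow with the sample size n at a rate governed by the dependence (mixing) of the data. All function names and the stump base class are assumptions made for this sketch.

```python
import numpy as np

def stump_predict(X, feat, thresh, sign):
    # Decision stump base classifier: sign if X[:, feat] > thresh, else -sign
    return sign * np.where(X[:, feat] > thresh, 1.0, -1.0)

def fit_stump(X, y, w):
    # Exhaustively pick the stump minimizing the weighted 0-1 error
    best, best_err = None, np.inf
    for feat in range(X.shape[1]):
        for thresh in np.unique(X[:, feat]):
            for sign in (1.0, -1.0):
                err = np.sum(w * (stump_predict(X, feat, thresh, sign) != y))
                if err < best_err:
                    best_err, best = err, (feat, thresh, sign)
    return best, best_err

def l1_regularized_boost(X, y, n_rounds, lam):
    """AdaBoost-style rounds, followed by a projection of the
    combination weights onto the 1-norm ball of radius lam
    (a stand-in for the paper's regularization parameter)."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    stumps, alphas = [], []
    for _ in range(n_rounds):
        stump, err = fit_stump(X, y, w)
        err = max(err, 1e-10)
        if err >= 0.5:
            break
        alpha = 0.5 * np.log((1 - err) / err)
        stumps.append(stump)
        alphas.append(alpha)
        # Reweight observations toward current mistakes
        w *= np.exp(-alpha * y * stump_predict(X, *stump))
        w /= w.sum()
    alphas = np.array(alphas)
    # Enforce the 1-norm restriction on the base classifiers' weights
    norm1 = np.abs(alphas).sum()
    if norm1 > lam:
        alphas *= lam / norm1
    return stumps, alphas

def boost_predict(X, stumps, alphas):
    F = sum(a * stump_predict(X, *s) for s, a in zip(stumps, alphas))
    return np.sign(F)

# Toy usage on synthetic i.i.d. data (the paper's point is that the
# analysis extends to stationary beta-mixing sequences)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
stumps, alphas = l1_regularized_boost(X, y, n_rounds=20, lam=5.0)
accuracy = np.mean(boost_predict(X, stumps, alphas) == y)
```

The ℓ1 rescaling here is only one way to realize the constraint; the point it illustrates is that the complexity of the composite classifier is controlled through the weights' 1-norm, with the permissible growth of `lam` depending on how weakly dependent the sample is.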

Original language: English (US)
Article number: 6650087
Pages (from-to): 651-660
Number of pages: 10
Journal: IEEE Transactions on Information Theory
Issue number: 1
State: Published - Jan 2014

All Science Journal Classification (ASJC) codes

  • Information Systems
  • Computer Science Applications
  • Library and Information Sciences


Keywords

  • Bayes-risk consistency
  • beta-mixing
  • boosting
  • classification
  • dependent data
  • empirical processes
  • memory
  • non-iid
  • penalized model selection
  • regularization


