Convergence and consistency of regularized boosting with weakly dependent observations

Aurelie C. Lozano, Sanjeev R. Kulkarni, Robert E. Schapire

Research output: Contribution to journal › Article › peer-review

4 Scopus citations

Abstract

This paper studies the statistical convergence and consistency of regularized boosting methods, where the samples need not be independent and identically distributed but can come from stationary weakly dependent sequences. Consistency is proven for the composite classifiers that result from a regularization achieved by restricting the 1-norm of the base classifiers' weights. The less restrictive nature of sampling considered here is manifested in the consistency result through a generalized condition on the growth of the regularization parameter. The weaker the sample dependence, the faster the regularization parameter is allowed to grow with increasing sample size. A consistency result is also provided for data-dependent choices of the regularization parameter.
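The regularization scheme the abstract describes constrains the ℓ1-norm of the base classifiers' weights, with the constraint radius allowed to grow with the sample size. The following is a minimal illustrative sketch of that idea, not the paper's actual algorithm or analysis: it greedily boosts decision stumps under exponential loss and rescales the weight vector whenever it leaves the ℓ1-ball of radius `lam`. All function names and the stump base class are hypothetical choices made for the example.

```python
# Illustrative sketch (NOT the paper's algorithm): boosting over decision
# stumps with the combined classifier's weight vector constrained to the
# l1-ball of radius `lam`. In the consistency results discussed above,
# the radius lambda_n may grow with n, more quickly when the beta-mixing
# dependence of the sample is weaker.
import numpy as np

def stump_predict(X, feat, thresh, sign):
    """Base classifier: +/-1 decision stump on one feature."""
    return sign * np.where(X[:, feat] <= thresh, 1.0, -1.0)

def l1_boost(X, y, lam, rounds=50):
    """Greedy boosting under exponential loss; keep ||w||_1 <= lam."""
    n, d = X.shape
    stumps, w = [], []
    margin = np.zeros(n)
    for _ in range(rounds):
        # Exponential-loss sample weights on the current margins.
        u = np.exp(-y * margin)
        u /= u.sum()
        # Exhaustively pick the stump with smallest weighted error.
        best = None
        for feat in range(d):
            for thresh in np.unique(X[:, feat]):
                for sign in (1.0, -1.0):
                    err = u @ (stump_predict(X, feat, thresh, sign) != y)
                    if best is None or err < best[0]:
                        best = (err, feat, thresh, sign)
        err, feat, thresh, sign = best
        err = min(max(err, 1e-10), 1 - 1e-10)  # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)
        stumps.append((feat, thresh, sign))
        w.append(alpha)
        # l1 regularization: rescale the weights back onto the ball.
        total = sum(abs(a) for a in w)
        if total > lam:
            w = [a * lam / total for a in w]
        margin = sum(a * stump_predict(X, f, t, s)
                     for a, (f, t, s) in zip(w, stumps))
    return stumps, w

def predict(X, stumps, w):
    """Sign of the l1-constrained weighted vote."""
    margin = sum(a * stump_predict(X, f, t, s)
                 for a, (f, t, s) in zip(w, stumps))
    return np.where(margin >= 0, 1.0, -1.0)
```

On a toy one-dimensional sample, the returned weights always satisfy the ℓ1 constraint while the composite classifier fits the data; the statistical content of the paper concerns how fast `lam` may grow with `n` while preserving Bayes-risk consistency under weak dependence.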

Original language: English (US)
Article number: 6650087
Pages (from-to): 651-660
Number of pages: 10
Journal: IEEE Transactions on Information Theory
Volume: 60
Issue number: 1
State: Published - Jan 2014

All Science Journal Classification (ASJC) codes

  • Information Systems
  • Computer Science Applications
  • Library and Information Sciences

Keywords

  • Bayes-risk consistency
  • beta-mixing
  • boosting
  • classification
  • dependent data
  • empirical processes
  • memory
  • non-iid
  • penalized model selection
  • regularization
