Abstract
We describe a framework for solving regression problems by reduction to classification. Our reduction is based on symmetrization of margin-based loss functions commonly used in boosting algorithms, namely, the logistic loss and the exponential loss. Our construction yields a smooth version of the ε-insensitive hinge loss that is used in support vector regression. A byproduct of this construction is a new, simple form of regularization for boosting-based classification and regression algorithms. We present two parametric families of batch learning algorithms for minimizing these losses. The first family employs a log-additive update and is based on recent boosting algorithms, while the second family uses a new form of additive update. We also describe and analyze online gradient descent (GD) and exponentiated gradient (EG) algorithms for the ε-insensitive logistic loss. Our regression framework also has implications for classification algorithms, namely, a new additive batch algorithm for the log-loss and exp-loss used in boosting.
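As a concrete illustration of the loss symmetrization idea, the sketch below implements plausible forms of the symmetrized logistic and exponential losses, together with a single online GD step for a linear predictor. The exact functional forms, the parameter names (`eps` for the insensitivity width, `eta` for the learning rate), and the linear-predictor setup are assumptions made for illustration, following the abstract's description rather than the paper's precise definitions.

```python
import numpy as np

def sigmoid(z):
    """Standard logistic sigmoid."""
    return 1.0 / (1.0 + np.exp(-z))

def sym_log_loss(y_hat, y, eps=0.1):
    """Assumed smooth eps-insensitive logistic loss: two mirrored
    logistic-loss terms, nearly flat when |y_hat - y| <= eps."""
    delta = y_hat - y
    return np.log1p(np.exp(delta - eps)) + np.log1p(np.exp(-delta - eps))

def sym_exp_loss(y_hat, y, eps=0.1):
    """Assumed symmetrized exponential loss (the same construction
    applied to the exp-loss used in boosting)."""
    delta = y_hat - y
    return np.exp(delta - eps) + np.exp(-delta - eps)

def gd_step(w, x, y, eta=0.1, eps=0.1):
    """One online gradient-descent step on sym_log_loss for a linear
    predictor y_hat = w @ x (a hypothetical setup for illustration)."""
    delta = w @ x - y
    # Gradient of sym_log_loss with respect to w.
    grad = (sigmoid(delta - eps) - sigmoid(-delta - eps)) * x
    return w - eta * grad
```

In this sketch, minimizing `sym_log_loss` drives predictions into the ε-tube around the targets while remaining everywhere differentiable, which is what distinguishes such a smooth loss from the nonsmooth ε-insensitive hinge loss of support vector regression.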
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 433-447 |
| Number of pages | 15 |
| Journal | Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science) |
| Volume | 2777 |
| State | Published - 2003 |
| Event | 16th Annual Conference on Learning Theory and 7th Kernel Workshop, COLT/Kernel 2003 - Washington, DC, United States |
| Duration | Aug 24 2003 → Aug 27 2003 |
All Science Journal Classification (ASJC) codes
- Theoretical Computer Science
- General Computer Science