Abstract
Algorithms for learning to rank can be inefficient when they employ risk functions that use structural information. We describe and analyze a learning algorithm that efficiently learns a ranking function using a domination loss. This loss is designed for problems in which we need to rank a small number of positive examples over a vast number of negative examples. In that context, we propose an efficient coordinate descent approach that scales linearly with the number of examples. We then present an extension that incorporates regularization, thus extending Vapnik’s notion of regularized empirical risk minimization to the ranking setting. We also discuss an extension to the case of multi-value feedback. Experiments performed on several benchmark datasets and large-scale Google internal datasets demonstrate the effectiveness of the learning algorithm in constructing compact models while retaining empirical accuracy.
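To make the abstract concrete, below is a minimal NumPy sketch of coordinate descent on a soft-max style loss in which every positive example is encouraged to dominate (score above) all negatives. The specific loss, update rule, and function names (`domination_loss`, `coordinate_step`, `fit`) are assumptions made for illustration only, not the chapter's exact formulation; the point it illustrates is that the sum over negatives can be shared across positives, so each update costs time linear in the number of examples.

```python
# Hypothetical sketch; the loss and update rule are illustrative assumptions,
# not the chapter's exact algorithm.
import numpy as np

def domination_loss(w, X_pos, X_neg, l2=0.0):
    """L(w) = sum_i log(1 + sum_j exp(w.x_neg[j] - w.x_pos[i])) + l2 * ||w||^2.

    The inner sum over negatives is shared by every positive, so one
    evaluation costs O(n_pos + n_neg) score computations.
    """
    s_pos, s_neg = X_pos @ w, X_neg @ w
    m = s_neg.max()
    lse_neg = m + np.log(np.exp(s_neg - m).sum())   # log sum_j exp(s_neg[j])
    return np.logaddexp(0.0, lse_neg - s_pos).sum() + l2 * (w @ w)

def coordinate_step(w, k, X_pos, X_neg, step=0.1, l2=0.0):
    """One coordinate-descent update on weight k using the analytic partial
    derivative; a single update is O(n_pos + n_neg)."""
    s_pos, s_neg = X_pos @ w, X_neg @ w
    m = s_neg.max()
    q = np.exp(s_neg - m)                           # shifted weights over negatives
    lse_neg = m + np.log(q.sum())
    p = 1.0 / (1.0 + np.exp(s_pos - lse_neg))       # per-positive violation weight
    mean_neg_k = (q * X_neg[:, k]).sum() / q.sum()  # weighted mean of feature k on negatives
    grad_k = (p * (mean_neg_k - X_pos[:, k])).sum() + 2.0 * l2 * w[k]
    w_new = w.copy()
    w_new[k] -= step * grad_k
    return w_new

def fit(X_pos, X_neg, n_sweeps=50, step=0.1, l2=1e-3):
    """Cyclic sweeps over coordinates; one full sweep over d features costs
    O((n_pos + n_neg) * d). The l2 term stands in for the regularized
    extension mentioned in the abstract."""
    w = np.zeros(X_pos.shape[1])
    for _ in range(n_sweeps):
        for k in range(w.size):
            w = coordinate_step(w, k, X_pos, X_neg, step, l2)
    return w
```

A fixed step size is used here only to keep the sketch short; a line search or closed-form coordinate update would be the more typical choice in a coordinate-descent solver.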
| Original language | English (US) |
| --- | --- |
| Title of host publication | Empirical Inference |
| Subtitle of host publication | Festschrift in Honor of Vladimir N. Vapnik |
| Publisher | Springer Berlin Heidelberg |
| Pages | 261-271 |
| Number of pages | 11 |
| ISBN (Electronic) | 9783642411366 |
| ISBN (Print) | 9783642411359 |
| DOIs | |
| State | Published - Jan 1 2013 |
All Science Journal Classification (ASJC) codes
- General Computer Science