Efficient learning of sparse ranking functions

Mark Stevens, Samy Bengio, Yoram Singer

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

Algorithms for learning to rank can be inefficient when they employ risk functions that use structural information. We describe and analyze a learning algorithm that efficiently learns a ranking function using a domination loss. This loss is designed for problems in which we need to rank a small number of positive examples over a vast number of negative examples. In this setting, we propose an efficient coordinate descent approach that scales linearly with the number of examples. We then present an extension that incorporates regularization, thus extending Vapnik’s notion of regularized empirical risk minimization to learning to rank. We also discuss an extension to the case of multi-value feedback. Experiments performed on several benchmark datasets and large-scale Google internal datasets demonstrate the effectiveness of the learning algorithm in constructing compact models while retaining empirical accuracy.
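For intuition only, the sketch below illustrates the kind of procedure the abstract describes: cyclic coordinate descent on a smooth surrogate of a domination loss, with an L1 penalty that produces a sparse (compact) linear ranker. The surrogate, the function names (domination_objective, sparse_rank_cd), and the parameters (lam, lr, n_epochs) are all assumptions made for this sketch; the paper's actual loss and update rules differ in detail.

    import numpy as np

    def domination_objective(w, X_pos, X_neg, lam):
        """Soft-max surrogate for a domination loss, plus an L1 penalty.
        Each positive is pushed to score above the soft-max of all
        negative scores.  This exact form is an assumption for the
        sketch, chosen because it factorizes over the negatives."""
        s_pos, s_neg = X_pos @ w, X_neg @ w
        Z = np.exp(s_neg).sum()          # total soft-max mass of negatives
        return np.log1p(Z * np.exp(-s_pos)).sum() + lam * np.abs(w).sum()

    def sparse_rank_cd(X_pos, X_neg, lam=0.1, lr=0.05, n_epochs=100):
        """Cyclic coordinate descent with soft-thresholding.  Scores are
        maintained incrementally, so each coordinate update costs
        O(n_pos + n_neg): linear in the number of examples, mirroring
        the scaling claimed in the abstract."""
        d = X_pos.shape[1]
        w = np.zeros(d)
        s_pos = X_pos @ w                # cached scores (all zero at start)
        s_neg = X_neg @ w
        for _ in range(n_epochs):
            for j in range(d):
                e_neg = np.exp(s_neg)    # a log-sum-exp would be safer in practice
                Z = e_neg.sum()
                dZ = e_neg @ X_neg[:, j]          # derivative of Z w.r.t. w_j
                r = Z * np.exp(-s_pos)            # per-positive loss ratio
                # Partial derivative of the smooth loss along coordinate j.
                g = ((np.exp(-s_pos) * dZ - r * X_pos[:, j]) / (1.0 + r)).sum()
                # Gradient step, then soft-threshold for the L1 penalty;
                # the shrinkage is what zeroes out weak coordinates.
                wj = w[j] - lr * g
                wj_new = np.sign(wj) * max(abs(wj) - lr * lam, 0.0)
                delta = wj_new - w[j]
                if delta != 0.0:
                    s_pos += delta * X_pos[:, j]  # O(n) incremental update
                    s_neg += delta * X_neg[:, j]
                    w[j] = wj_new
        return w

    # Tiny synthetic check: a handful of positives vs. many negatives.
    rng = np.random.default_rng(0)
    X_pos = rng.normal(1.0, 1.0, size=(5, 20))    # few positives
    X_neg = rng.normal(0.0, 1.0, size=(500, 20))  # vast pool of negatives
    w = sparse_rank_cd(X_pos, X_neg)
    print("nonzero weights:", np.count_nonzero(w), "of", w.size)

Because the soft-max over negatives factorizes, each partial derivative needs only the cached scores and one feature column, which is what keeps the per-update cost linear; the soft-thresholding step supplies the sparsity.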

Original language: English (US)
Title of host publication: Empirical Inference
Subtitle of host publication: Festschrift in Honor of Vladimir N. Vapnik
Publisher: Springer Berlin Heidelberg
Pages: 261-271
Number of pages: 11
ISBN (Electronic): 9783642411366
ISBN (Print): 9783642411359
DOIs
State: Published - Jan 1 2013

All Science Journal Classification (ASJC) codes

  • General Computer Science
