TY - JOUR
T1 - Pegasos
T2 - Primal estimated sub-gradient solver for SVM
AU - Shalev-Shwartz, Shai
AU - Singer, Yoram
AU - Srebro, Nathan
AU - Cotter, Andrew
N1 - Copyright:
Copyright 2011 Elsevier B.V., All rights reserved.
PY - 2011/3
Y1 - 2011/3
AB - We describe and analyze a simple and effective stochastic sub-gradient descent algorithm for solving the optimization problem cast by Support Vector Machines (SVM). We prove that the number of iterations required to obtain a solution of accuracy ε is Õ(1/ε), where each iteration operates on a single training example. In contrast, previous analyses of stochastic gradient descent methods for SVMs require Ω(1/ε²) iterations. As in previously devised SVM solvers, the number of iterations also scales linearly with 1/λ, where λ is the regularization parameter of SVM. For a linear kernel, the total run-time of our method is Õ(d/(λε)), where d is a bound on the number of non-zero features in each example. Since the run-time does not depend directly on the size of the training set, the resulting algorithm is especially suited for learning from large datasets. Our approach also extends to non-linear kernels while working solely on the primal objective function, though in this case the run-time does depend linearly on the training set size. Our algorithm is particularly well suited for large text classification problems, where we demonstrate an order-of-magnitude speedup over previous SVM learning methods.
KW - SVM
KW - Stochastic gradient descent
UR - http://www.scopus.com/inward/record.url?scp=79952748054&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=79952748054&partnerID=8YFLogxK
U2 - 10.1007/s10107-010-0420-4
DO - 10.1007/s10107-010-0420-4
M3 - Article
AN - SCOPUS:79952748054
SN - 0025-5610
VL - 127
SP - 3
EP - 30
JO - Mathematical Programming
JF - Mathematical Programming
IS - 1
ER -
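
For readers who want a concrete picture of the single-example stochastic sub-gradient step summarized in the abstract, the following Python sketch implements a Pegasos-style update. It is a minimal illustration, not the authors' reference implementation: the function name pegasos_sketch, the step size 1/(λt), the optional projection onto the ball of radius 1/√λ, and all default parameter values are assumptions of this sketch rather than details stated in the record above.

import numpy as np

def pegasos_sketch(X, y, lam=0.1, n_iters=1000, seed=0):
    # Minimal sketch of a Pegasos-style stochastic sub-gradient SVM solver.
    # X: (n, d) feature matrix, y: labels in {-1, +1}, lam: the regularization
    # parameter lambda. Names and defaults are illustrative assumptions.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, n_iters + 1):
        i = rng.integers(n)            # each iteration looks at a single training example
        eta = 1.0 / (lam * t)          # step size 1/(lambda * t), assumed here
        if y[i] * X[i].dot(w) < 1:     # hinge loss active: sub-gradient is lam*w - y_i*x_i
            w = (1.0 - eta * lam) * w + eta * y[i] * X[i]
        else:                          # hinge loss inactive: sub-gradient is lam*w
            w = (1.0 - eta * lam) * w
        # Optional projection onto the ball of radius 1/sqrt(lam).
        norm = np.linalg.norm(w)
        if norm > 0:
            w *= min(1.0, 1.0 / (np.sqrt(lam) * norm))
    return w

A typical call would be w = pegasos_sketch(X_train, y_train, lam=1e-4, n_iters=100000) on a dense feature matrix; obtaining the per-iteration cost proportional to the number of non-zero features mentioned in the abstract would additionally require a sparse representation rather than the dense NumPy arrays used in this sketch.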