TY - GEN
T1 - Parameter design tradeoff between prediction performance and training time for Ridge-SVM
AU - Rothe, Rasmus
AU - Yu, Yinan
AU - Kung, S. Y.
N1 - Copyright:
Copyright 2014 Elsevier B.V., All rights reserved.
PY - 2013
Y1 - 2013
AB - It is well known that the accuracy of classifiers strongly depends on the distribution of the data. Consequently, a versatile classifier with a broad range of design parameters is better able to cope with the various scenarios encountered in real-world applications. Kung [1][2][3] presented such a classifier, named Ridge-SVM, which incorporates the advantages of both Kernel Ridge Regression and Support Vector Machines by combining their regularization mechanisms to enhance robustness. In this paper, this novel classifier is tested on four different datasets and an optimal combination of parameters is identified. Furthermore, the influence of the parameter choice on the training time is quantified, and methods to tune the parameters efficiently are presented. This prior knowledge of how each parameter influences training is especially important for big data applications, where the training time becomes the bottleneck, as well as for applications in which the algorithm is regularly retrained on new data.
KW - Ridge-SVM
KW - parameter tuning
KW - training time
KW - unified model for supervised learning
KW - weight-error-curve (WEC)
UR - http://www.scopus.com/inward/record.url?scp=84893257716&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84893257716&partnerID=8YFLogxK
U2 - 10.1109/MLSP.2013.6661962
DO - 10.1109/MLSP.2013.6661962
M3 - Conference contribution
AN - SCOPUS:84893257716
SN - 9781479911806
T3 - IEEE International Workshop on Machine Learning for Signal Processing, MLSP
BT - 2013 IEEE International Workshop on Machine Learning for Signal Processing - Proceedings of MLSP 2013
T2 - 2013 16th IEEE International Workshop on Machine Learning for Signal Processing, MLSP 2013
Y2 - 22 September 2013 through 25 September 2013
ER -