It is well known that the accuracy of a classifier strongly depends on the distribution of the data. Consequently, a versatile classifier with a broad range of design parameters is better able to cope with the various scenarios encountered in real-world applications. Kung presented such a classifier, named Ridge-SVM, which combines the regularization mechanisms of Kernel Ridge Regression and Support Vector Machines to enhance robustness. In this paper, this classifier is evaluated on four different datasets and an optimal combination of parameters is identified. Furthermore, the influence of the parameter choice on the training time is quantified, and methods for tuning the parameters efficiently are presented. Such prior knowledge of how each parameter influences training is particularly important for big-data applications, where training time becomes the bottleneck, and for applications in which the algorithm is regularly retrained on new data.
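To make the regularization idea concrete, the following is a minimal NumPy sketch of Kernel Ridge Regression, one of the two ingredients that Ridge-SVM combines; it is not the Ridge-SVM algorithm itself. The RBF kernel, the ridge parameter `rho`, and the toy dataset are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel: exp(-gamma * ||x - y||^2)
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * d2)

def krr_fit(X, y, rho=1.0, gamma=1.0):
    # Closed-form KRR solution: a = (K + rho * I)^{-1} y.
    # The ridge parameter rho is the regularization knob that
    # Ridge-SVM inherits from Kernel Ridge Regression.
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + rho * np.eye(len(X)), y)

def krr_predict(X_train, a, X_new, gamma=1.0):
    # Prediction is a kernel-weighted sum over the training points.
    return rbf_kernel(X_new, X_train, gamma) @ a

# Toy binary problem: label is the sign of the first feature.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.sign(X[:, 0])

a = krr_fit(X, y, rho=0.1, gamma=0.5)
pred = np.sign(krr_predict(X, a, X, gamma=0.5))
acc = np.mean(pred == y)
```

Increasing `rho` smooths the decision function at the cost of training fit; Ridge-SVM adds further parameters from the SVM side, which is what makes a systematic tuning strategy worthwhile.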