Kernel SODA: A feature reduction technique using kernel based analysis

Yinan Yu, Tomas McKelvey, S. Y. Kung

Research output: Contribution to conference › Paper › peer-review

2 Scopus citations

Abstract

A feature extraction technique called Successively Orthogonal Discriminant Analysis (SODA) has recently been proposed to overcome a limitation of Linear Discriminant Analysis (LDA). LDA seeks a projection vector such that the projected values of data from both classes have maximum class separability; however, for binary classification problems only one such vector can be found, due to the rank deficiency of the between-class scatter matrix. As a feature extraction technique, SODA instead obtains a transformation matrix rather than a single vector. In this paper, the kernel version of SODA is presented in both the intrinsic space and the empirical space. To obtain the solution without sacrificing numerical efficiency, we propose a relaxed formulation and a data selection scheme for large-scale computation. Simulations are conducted on five data sets from the UCI database to verify and evaluate the new approach.
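The rank deficiency mentioned above can be illustrated with a minimal sketch (not the authors' code, and not SODA itself): for two classes, the between-class scatter matrix is the outer product of a single mean-difference vector, so its rank is one, and standard LDA can extract only one discriminant direction regardless of the feature dimension. The data and variable names below are hypothetical.

```python
import numpy as np

# Sketch of LDA's binary-class rank deficiency: the between-class scatter
# S_b = (m1 - m2)(m1 - m2)^T is an outer product of one vector, so it has
# rank 1. LDA can therefore yield only one projection vector, which is the
# limitation SODA addresses by producing a full transformation matrix.
rng = np.random.default_rng(0)
X1 = rng.normal(loc=0.0, size=(100, 5))   # class 1 samples, 5 features
X2 = rng.normal(loc=3.0, size=(100, 5))   # class 2 samples, 5 features

d = X1.mean(axis=0) - X2.mean(axis=0)     # difference of class means
S_b = np.outer(d, d)                      # between-class scatter matrix

print(np.linalg.matrix_rank(S_b))         # prints 1 despite 5 features
```

Because at most rank(S_b) = 1 generalized eigenvector carries discriminative information, any multi-dimensional projection must come from a different construction, such as SODA's successive orthogonalization.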

Original language: English (US)
Pages: 72-78
Number of pages: 7
DOIs
State: Published - 2013
Event: 2013 12th International Conference on Machine Learning and Applications, ICMLA 2013 - Miami, FL, United States
Duration: Dec 4, 2013 - Dec 7, 2013

Other

Other: 2013 12th International Conference on Machine Learning and Applications, ICMLA 2013
Country/Territory: United States
City: Miami, FL
Period: 12/4/13 - 12/7/13

All Science Journal Classification (ASJC) codes

  • Computer Science Applications
  • Human-Computer Interaction

Keywords

  • Discriminant Analysis
  • Feature extraction
  • Kernel
  • SODA
  • big data
