Abstract
Kernel methods are nonparametric feature extraction techniques that boost the learning capability of machine learning algorithms through nonlinear transformations. A major challenge in their basic form, however, is that the computational complexity and memory requirements do not scale well with the training set size. Kernel approximation is commonly employed to resolve this issue. Essentially, kernel approximation is equivalent to learning an approximated subspace of the high-dimensional feature vector space induced and characterized by the kernel function. With streaming data acquisition, approximated subspaces can be constructed adaptively. Explicit feature vectors are then extracted by a transformation onto the approximated subspace, and linear learning techniques can subsequently be applied. From a computational point of view, operations in kernel methods are easily parallelized, so modern computing infrastructures can be exploited for efficient processing. Moreover, the extracted explicit feature vectors interface readily with other learning techniques.
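The chapter's keywords point to the Nyström method as one such kernel approximation. As a minimal sketch of that idea (not the authors' exact algorithm), the example below uses a set of landmark points to span an approximated subspace and extracts explicit feature vectors by projecting onto it; the function names, the RBF kernel choice, and the `gamma` parameter are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel between rows of X and rows of Y;
    # the kernel choice here is an assumption for illustration.
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq)

def nystrom_features(X, landmarks, gamma=1.0):
    # K_nm: kernel values between all n samples and the m landmarks.
    K_nm = rbf_kernel(X, landmarks, gamma)
    # K_mm: kernel among the landmarks; its eigendecomposition
    # characterizes the approximated subspace of the feature space.
    K_mm = rbf_kernel(landmarks, landmarks, gamma)
    eigvals, eigvecs = np.linalg.eigh(K_mm)
    # Drop numerically negligible directions, then build the projection
    # so that Phi @ Phi.T approximates the full kernel matrix.
    keep = eigvals > 1e-10
    W = eigvecs[:, keep] / np.sqrt(eigvals[keep])
    return K_nm @ W  # explicit feature vectors, one row per sample

# Usage: m << n landmarks give explicit features that any linear
# learner can consume in place of the full n-by-n kernel matrix.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 5))
landmarks = X[rng.choice(len(X), size=50, replace=False)]
Phi = nystrom_features(X, landmarks, gamma=0.5)  # shape (1000, <=50)
```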
| Original language | English (US) |
|---|---|
| Title of host publication | Adaptive Learning Methods for Nonlinear System Modeling |
| Publisher | Elsevier |
| Pages | 127-147 |
| Number of pages | 21 |
| ISBN (Electronic) | 9780128129760 |
| ISBN (Print) | 9780128129777 |
| DOIs | |
| State | Published - Jan 1 2018 |
All Science Journal Classification (ASJC) codes
- General Engineering
Keywords
- CUDA
- Classification
- GPU
- Kernel approximation
- Nyström
- Spark
- Subspace learning