DocumentCode :
2207255
Title :
Subset kernel principal component analysis
Author :
Washizawa, Yoshikazu
Author_Institution :
Brain Sci. Inst., RIKEN, Wako, Japan
fYear :
2009
fDate :
1-4 Sept. 2009
Firstpage :
1
Lastpage :
6
Abstract :
Kernel principal component analysis (kernel PCA or KPCA) has been used widely for non-linear feature extraction, dimensionality reduction, and classification problems. However, KPCA has high computational complexity: it requires the eigenvalue decomposition of a matrix whose size equals the number of samples n. Moreover, to compute the projection of a vector onto the subspace obtained by KPCA, all n samples must be stored and the kernel function evaluated n times. To overcome these problems, we propose subset KPCA, which minimizes a residual error over all samples while using only a limited number of them, and we provide its solution. Experimental results on synthetic and real data show that the proposed method gives almost the same result as KPCA even when the size of the problem is one-tenth that of KPCA.
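The abstract contrasts full KPCA, whose eigenproblem grows with the sample count n, with a subset formulation that expresses components over only m ≪ n samples. A minimal NumPy sketch of that idea follows; it is an illustrative Nyström-style construction under simplified centering, not the paper's exact algorithm, and the helper `rbf_kernel`, the random landmark choice, and the ridge term `1e-8` are assumptions for the demo.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between rows of X and rows of Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
n, m, r = 200, 20, 2                      # n samples, subset size m << n, r components
X = rng.normal(size=(n, 2))
S = X[rng.choice(n, m, replace=False)]    # landmark subset (random here for simplicity)

K_mn = rbf_kernel(S, X)                   # m x n cross-kernel
K_mm = rbf_kernel(S, S)                   # m x m subset Gram matrix

# Crude centering of the cross-kernel over the full sample; proper
# feature-space centering is more involved and is simplified here.
K_mn_c = K_mn - K_mn.mean(axis=1, keepdims=True)

# Coefficients A over the subset that minimize the reconstruction residual
# over all n samples lead to the generalized symmetric eigenproblem
#   (1/n) K_mn K_nm a = lambda K_mm a,
# solved below via a Cholesky whitening of K_mm.
C = K_mn_c @ K_mn_c.T / n
L = np.linalg.cholesky(K_mm + 1e-8 * np.eye(m))   # small ridge for stability
Linv = np.linalg.inv(L)
M = Linv @ C @ Linv.T                              # symmetric reduced problem
eigvals, V = np.linalg.eigh(M)                     # ascending eigenvalues
A = Linv.T @ V[:, ::-1][:, :r]                     # top-r directions, m x r

# Projecting a new point now needs only m kernel evaluations, not n.
x_new = rng.normal(size=(1, 2))
z = rbf_kernel(x_new, S) @ A                       # 1 x r projection
```

The storage and per-projection cost drop from O(n) kernel evaluations to O(m), which is the practical benefit the abstract claims for subset KPCA.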
Keywords :
computational complexity; data reduction; eigenvalues and eigenfunctions; feature extraction; minimisation; pattern classification; principal component analysis; support vector machines; KPCA; computational complexity; dimensionality reduction; eigenvalue decomposition; nonlinear feature extraction; pattern classification; residual error minimisation; subset kernel principal component analysis; support vector machine; Computational complexity; Eigenvalues and eigenfunctions; Feature extraction; Hilbert space; Kernel; Noise reduction; Principal component analysis; Support vector machine classification; Support vector machines; Visualization;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Machine Learning for Signal Processing, 2009. MLSP 2009. IEEE International Workshop on
Conference_Location :
Grenoble
Print_ISBN :
978-1-4244-4947-7
Electronic_ISBN :
978-1-4244-4948-4
Type :
conf
DOI :
10.1109/MLSP.2009.5306221
Filename :
5306221