Why Can SVM Be Performed in PCA Transformed Space for Classification?

PCA plus SVM is a popular framework for classification problems. On the one hand, applying PCA first keeps the SVM kernel Gram matrix from becoming too large and saves storage space. On the other hand, the PCA-plus-SVM approach has been empirically verified to be effective for classification. This paper builds a theoretical foundation for the framework: by derivation, we discover an equivalence relation between PCA plus SVM and LDA for binary classification. We further give a detailed analysis of the framework and validate our viewpoint against plain SVM and a combined SVM/LDA approach on the ORL face database, the Yale face database, and real-world benchmark data. The experimental results indicate that SVM can be performed in the PCA-transformed space for classification.
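The pipeline the abstract describes can be sketched in a few lines: project the data onto its top principal components, then train a linear SVM in that reduced space. The sketch below is illustrative only, not the paper's implementation; it uses NumPy, PCA via SVD of the centered data, and a Pegasos-style subgradient solver as a stand-in for a full SVM trainer. The toy data, hyperparameters (`lam`, `epochs`), and helper names are assumptions.

```python
import numpy as np

def pca_transform(X, k):
    # Center the data and project onto the top-k principal components.
    mu = X.mean(axis=0)
    Xc = X - mu
    # Rows of Vt are principal directions (right singular vectors).
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, mu, Vt[:k]

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    # Pegasos-style stochastic subgradient descent on the hinge loss.
    # Labels y must be in {-1, +1}.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b, t = np.zeros(d), 0.0, 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)          # decaying step size
            if y[i] * (X[i] @ w + b) < 1:  # margin violated
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:
                w = (1 - eta * lam) * w
    return w, b

# Toy two-class data in 5 dimensions (assumed for illustration).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 0.3, (50, 5)), rng.normal(1, 0.3, (50, 5))])
y = np.r_[-np.ones(50), np.ones(50)]

Z, mu, V = pca_transform(X, k=2)   # reduce 5 dims -> 2 before the SVM
w, b = train_linear_svm(Z, y)
acc = np.mean(np.sign(Z @ w + b) == y)
print(f"training accuracy in PCA space: {acc:.2f}")
```

Note the storage point from the abstract: the SVM only ever sees the `k`-dimensional projections `Z`, so kernel evaluations and stored support vectors live in the reduced space rather than the original one.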



Advanced Materials Research (Volumes 181-182)

Edited by: Qi Luo and Yuanzhi Wang




J. Zhang et al., "Why Can SVM Be Performed in PCA Transformed Space for Classification?", Advanced Materials Research, Vols. 181-182, pp. 1031-1037, 2011

Online since: January 2011


