Feature Extraction for Kernel Minimum Squared Error by Sparsity Shrinkage

Abstract:

In this paper, a sparsity-based model is proposed for feature selection in the kernel minimum squared error (KMSE) method. By imposing a sparsity shrinkage term, we formulate the selection of a subset of training examples as an optimization problem. Because only a small portion of the training examples is retained, the computational burden of feature extraction is greatly reduced. Experimental results on several benchmark datasets demonstrate the effectiveness and efficiency of our method.
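
The full paper is not included in this preview, so the following is only a minimal sketch, in Python, of the idea stated in the abstract: adding an l1 (lasso) shrinkage term to the kernel least-squares objective drives most expansion coefficients to zero, and feature extraction for new samples then requires kernel evaluations against the retained training examples only. The RBF kernel, the use of scikit-learn's Lasso solver, and all function names below are assumptions made for illustration, not the authors' implementation.

import numpy as np
from sklearn.linear_model import Lasso

def rbf_kernel(X, Z, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X and the rows of Z.
    sq_dists = (X**2).sum(1)[:, None] + (Z**2).sum(1)[None, :] - 2.0 * X @ Z.T
    return np.exp(-gamma * sq_dists)

def fit_sparse_kmse(X_train, y_train, gamma=1.0, lam=1e-2):
    # Kernel least squares with an l1 shrinkage term:
    #     min_a ||K a + b - y||^2 + lam * ||a||_1
    # The l1 penalty sets most coefficients exactly to zero, so only a small
    # subset of the training examples is kept for later feature extraction.
    K = rbf_kernel(X_train, X_train, gamma)
    model = Lasso(alpha=lam, fit_intercept=True, max_iter=10000)
    model.fit(K, y_train)                   # e.g. y_train in {-1, +1} for two classes
    selected = np.flatnonzero(model.coef_)  # indices of the retained training examples
    return model.coef_[selected], model.intercept_, selected

def extract_features(X_new, X_train, coef, intercept, selected, gamma=1.0):
    # New samples are projected using kernel values against the selected
    # training examples only, which is where the computational saving comes from.
    K_new = rbf_kernel(X_new, X_train[selected], gamma)
    return K_new @ coef + intercept

Note that scikit-learn's Lasso minimizes (1/(2n))||y - Kw||^2 + lam*||w||_1, so lam here corresponds to a regularization parameter only up to scaling; in practice both lam and the kernel width gamma would be chosen by cross-validation.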

Info:

Pages: 450-453

Online since: April 2014

Copyright: © 2014 Trans Tech Publications Ltd. All Rights Reserved
