Condensed Nearest Neighbor Decision Rule Input Weight Sequential Feed-Forward Neural Networks

Abstract:

This paper proposes a new approach referred to as condensed nearest neighbor decision rule (CNN) input-weight sequential feed-forward neural networks (CIW-SFFNS). First, the paper shows the difference in optimization constraints between the extreme learning machine (ELM) and the constrained-optimization-based extreme learning machine. Second, it proposes a method that uses the CNN rule to select the hidden-layer weights from the training examples. Moreover, we compare the error-minimized extreme learning machine (EM-ELM), support vector sequential feed-forward neural networks (SV-SFFNS), and CIW-SFFNS in two respects: test accuracy and the number of hidden nodes. We present the results of an experimental study on 10 classification data sets. The CIW-SFFNS algorithm achieves a statistically significant improvement in generalization performance over EM-ELM and SV-SFFNS.
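The condensed nearest neighbor rule mentioned in the abstract can be sketched as follows. This is an illustrative minimal version of Hart's CNN condensing step only (iteratively keeping the examples that 1-NN misclassifies); the paper's use of the retained examples to set hidden-layer weights is not reproduced here, and the sample data below is hypothetical.

```python
# Minimal sketch of Hart's condensed nearest neighbor (CNN) rule:
# keep only the subset of examples needed for 1-NN to classify
# every training example correctly.

def euclidean(a, b):
    # plain Euclidean distance between two points (tuples of floats)
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def nearest_label(store, point):
    # label of the stored (point, label) pair closest to `point`
    return min(store, key=lambda s: euclidean(s[0], point))[1]

def condense(samples):
    """Return a condensed subset of (point, label) pairs."""
    store = [samples[0]]                      # seed with the first example
    changed = True
    while changed:                            # repeat until a full pass adds nothing
        changed = False
        for point, label in samples:
            if nearest_label(store, point) != label:
                store.append((point, label))  # absorb the misclassified example
                changed = True
    return store

# Hypothetical toy data: two well-separated 1-D classes
data = [((0.0,), 0), ((0.1,), 0), ((0.2,), 0), ((1.0,), 1), ((1.1,), 1)]
subset = condense(data)
```

On this toy data the condensed set is much smaller than the original, yet 1-NN on the subset still classifies every original example correctly; it is this reduced, boundary-focused subset that the paper draws hidden-layer weights from.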

Info:

Pages:

712-717

Online since:

July 2014

Copyright:

© 2014 Trans Tech Publications Ltd. All Rights Reserved

