Study on Deep Structure of Extreme Learning Machine (DS-ELM) for Datasets with Noise


Abstract:

Extreme learning machine (ELM), a relatively novel machine learning algorithm for single hidden layer feed-forward neural networks (SLFNs), has shown competitive performance owing to its simple structure and superior training speed. To improve the effectiveness of ELM on noisy datasets, a deep structure of ELM, DS-ELM for short, is proposed in this paper. DS-ELM consists of three network levels: the first level is an auto-associative neural network (AANN) that aims to filter out noise and, when necessary, reduce dimensionality; the second level is another AANN that fixes the input weights and biases of the ELM; and the last level is the ELM itself. Experiments on four noisy datasets are carried out to examine the proposed DS-ELM algorithm, and the results show that DS-ELM outperforms ELM when dealing with noisy data.
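The final ELM stage described above follows the standard ELM training scheme: hidden-layer weights and biases are drawn at random and left untrained, and only the output weights are solved in closed form via a Moore-Penrose pseudo-inverse. The following is a minimal NumPy sketch of that stage, not the authors' implementation; the tanh activation, the hidden-layer size, and the function names are illustrative assumptions.

```python
import numpy as np

def train_elm(X, T, n_hidden=64, rng=None):
    """Minimal ELM training sketch (illustrative, not the paper's code).

    X: (n_samples, n_features) inputs; T: (n_samples, n_outputs) targets.
    Hidden-layer weights W and biases b are random and never updated;
    only the output weights beta are fitted, in closed form.
    """
    rng = np.random.default_rng(rng)
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random biases
    H = np.tanh(X @ W + b)                 # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T           # least-squares output weights
    return W, b, beta

def predict_elm(X, W, b, beta):
    """Forward pass of the trained ELM."""
    return np.tanh(X @ W + b) @ beta
```

In DS-ELM, the input `X` would be the output of the two preceding AANN levels (denoised and possibly dimension-reduced), rather than the raw noisy data; the second AANN level is additionally used to fix `W` and `b` instead of drawing them at random as in this plain-ELM sketch.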


Info:

Periodical:

Advanced Materials Research (Volumes 989-994)

Pages:

3679-3682


Online since:

July 2014



Copyright:

© 2014 Trans Tech Publications Ltd. All Rights Reserved

