A New Type of Hybrid Learning Algorithm for Three-Layered Feed-Forward Neural Networks

Abstract:

The problem of local minima cannot be avoided in the nonlinear optimization of neural network parameters during learning, and the larger the optimization space, the more pronounced the problem becomes. This paper proposes a new hybrid learning algorithm for three-layer feed-forward neural networks. The algorithm is derived for three-layer feed-forward networks whose output-layer activation function is linear, by combining a quasi-Newton algorithm with adaptive decoupled step and momentum (QNADSM) with an iterative least-squares method. Simulations show that the hybrid algorithm has strong self-adaptive capability, low computational cost, and fast convergence, making it an effective and practical algorithm for engineering use.
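As a rough illustration of the hybrid scheme the abstract describes, the following Python sketch alternates two updates: the hidden-layer weights are refined by a momentum-accelerated step, while the linear output layer is re-solved in closed form by least squares at every iteration. This is a minimal sketch under stated assumptions (tanh hidden units, mean-squared-error loss), and a plain gradient step with momentum stands in for the QNADSM update, whose details are not given in the abstract; it is not the paper's implementation.

# Minimal sketch of the hybrid scheme: gradient-plus-momentum step for the
# hidden layer (standing in for QNADSM, not detailed in the abstract) combined
# with an iterative least-squares solution for the linear output layer.
import numpy as np

def train_hybrid(X, Y, n_hidden=10, iters=200, lr=0.1, beta=0.9, seed=0):
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], n_hidden))  # hidden weights
    b1 = np.zeros(n_hidden)                                   # hidden biases
    V = np.zeros_like(W1)                                     # momentum buffer
    for _ in range(iters):
        H = np.tanh(X @ W1 + b1)                  # hidden activations
        # Linear output layer: closed-form least-squares solution.
        W2, *_ = np.linalg.lstsq(H, Y, rcond=None)
        E = H @ W2 - Y                            # output residual
        dH = (E @ W2.T) * (1.0 - H ** 2)          # backprop through tanh
        G = X.T @ dH / X.shape[0]                 # gradient w.r.t. W1
        V = beta * V - lr * G                     # momentum-accelerated step
        W1 += V
        b1 -= lr * dH.mean(axis=0)
    return W1, b1, W2

# Example: approximate y = sin(x) on [-pi, pi].
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
Y = np.sin(X)
W1, b1, W2 = train_hybrid(X, Y)
print("MSE:", np.mean((np.tanh(X @ W1 + b1) @ W2 - Y) ** 2))

Re-solving the output layer exactly at each step shrinks the space the nonlinear optimizer must search, which is the motivation for the hybrid design.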

Info:

Periodical:

Advanced Materials Research (Volumes 1030-1032)

Pages:

1627-1632

Online since:

September 2014

Copyright:

© 2014 Trans Tech Publications Ltd. All Rights Reserved

