Analysis of Generalization for a New Kind of Neural Network

Abstract:

To characterize the performance of a new approximation algorithm that uses B-spline weight functions in feedforward neural networks, this paper presents an analysis of its generalization. The network architecture is very simple, and the number of weight functions is independent of the number of training patterns. Three key theorems are proved, showing that the upper bound on the network error can be driven to zero by increasing the density of the knots. The results show that the new algorithm has good generalization properties and a high learning speed.
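The abstract's central claim, that increasing the knot density drives the error bound toward zero, can be illustrated with a minimal sketch. This is not the paper's algorithm: it uses a degree-1 (piecewise-linear) B-spline interpolant of a smooth target on uniform knots rather than the networks' cubic-spline weight functions, and all names are illustrative. The sup-norm error shrinks each time the knot spacing is halved.

```python
import math

def approx_error(f, n_knots, a=0.0, b=1.0, n_test=1000):
    """Sup-norm error of a degree-1 B-spline (piecewise-linear)
    interpolant of f on n_knots uniformly spaced knots over [a, b]."""
    h = (b - a) / (n_knots - 1)
    knots = [a + i * h for i in range(n_knots)]
    vals = [f(x) for x in knots]

    def spline(x):
        # locate the knot interval containing x, then interpolate linearly
        i = min(int((x - a) / h), n_knots - 2)
        t = (x - knots[i]) / h
        return (1 - t) * vals[i] + t * vals[i + 1]

    return max(abs(f(x) - spline(x))
               for x in (a + (b - a) * j / n_test for j in range(n_test + 1)))

# Smooth target; halve the knot spacing repeatedly and watch the error fall.
f = lambda x: math.sin(2 * math.pi * x)
errors = [approx_error(f, n) for n in (5, 9, 17, 33)]
```

For a target with bounded second derivative the error of this interpolant is O(h^2) in the knot spacing h, which is the qualitative behavior the paper's theorems establish (with their own bounds) for the B-spline weight-function networks.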

Info:

Periodical:

Advanced Materials Research (Volumes 760-762)

Pages:

2023-2027

Online since:

September 2013

Copyright:

© 2013 Trans Tech Publications Ltd. All Rights Reserved

References:

[1] D. Y. Zhang: New Theories and Methods on Neural Networks (Tsinghua University Press, Beijing 2006).

[2] D. Y. Zhang: New Algorithm for Training Feedforward Neural Networks with Cubic Spline Weight Functions, Journal of Systems Engineering and Electronics, Vol. 28 (2006), p.1434.

[3] D. Y. Zhang: New Algorithm for Training Neural Networks Based on Generalized Chebyshev Polynomials, Journal of Systems Engineering and Electronics, Vol. 30 (2008), p.2274.

[4] D. Y. Zhang: A New Algorithm of Neural Networks with B-Spline Weight Functions, in: Proc. 2010 International Conference on Artificial Intelligence and Education, IEEE Press, Hangzhou (2010), p.782.

[5] D. Y. Zhang: An Improved Algorithm Using B-Spline Weight Functions for Training Feedforward Neural Networks, Applied Mechanics and Materials, Vol. 278-280 (2013), p.1301.

DOI: 10.4028/www.scientific.net/amm.278-280.1301