Complexity Analysis of Neural Network Using First Category Orthogonal Weight Functions

Abstract:

To characterize the performance of a new kind of neural network, we analyse the complexity of training a neural network that uses orthogonal weight functions. Full adders serve as the neurons of the network, and the weight functions are orthogonal functions. We derive how the number of training iterations scales with the number of input dimensions, output dimensions, and training patterns. Finally, simulation examples verify the theoretical results obtained in this paper.
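The exact iteration-count relationships are derived in the full paper. As a loose, hypothetical illustration of the general idea only (not the authors' algorithm), the Python sketch below fits a single output whose weight is a function expanded in an orthogonal Chebyshev basis, and reports how many gradient-descent iterations a given set of training patterns requires. The basis choice, learning rate, and tolerance are all illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch only: a weight *function* w(x) represented in an
# orthogonal Chebyshev basis, fitted by gradient descent. The basis,
# learning rate, and tolerance are illustrative, not taken from the paper.

def chebyshev_basis(x, order):
    """Evaluate Chebyshev polynomials T_0 .. T_{order-1} at the points x."""
    return np.polynomial.chebyshev.chebvander(x, order - 1)  # shape (n, order)

def train(x, y, order=4, lr=0.5, tol=1e-6, max_iter=10_000):
    """Fit y ~ Phi(x) @ c by gradient descent on the squared error.

    Returns the coefficients and the number of iterations used -- the
    quantity whose scaling with the training set the paper analyses.
    """
    phi = chebyshev_basis(x, order)
    c = np.zeros(order)
    for it in range(1, max_iter + 1):
        err = phi @ c - y                 # residual over all training patterns
        c -= lr * (phi.T @ err) / len(x)  # mean-squared-error gradient step
        if np.max(np.abs(err)) < tol:
            return c, it
    return c, max_iter

# Training patterns: a cubic target, exactly representable in the basis
x = np.linspace(-1.0, 1.0, 50)
y = 0.5 * x + x ** 3
c, iters = train(x, y)
print(f"converged after {iters} iterations")
```

This only makes the notion of "iteration count as a function of the training data" concrete; the paper's network instead uses full adders as neurons and derives the scaling analytically.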

Info:

Periodical:

Advanced Materials Research (Volumes 989-994)

Pages:

4437-4440

Online since:

July 2014

Copyright:

© 2014 Trans Tech Publications Ltd. All Rights Reserved
