[1]
G. Cybenko, Approximation by superpositions of a sigmoidal function, Mathematics of Control, Signals and Systems, vol. 2, pp. 303-314, (1989).
DOI: 10.1007/bf02551274
[2]
K. Hornik, M. Stinchcombe, and H. White, Multilayer feedforward networks are universal approximators, Neural Networks, vol. 2, no. 5, pp. 359-366, (1989).
DOI: 10.1016/0893-6080(89)90020-8
[3]
J. R. Koza and J. P. Rice, Genetic generation of both the weights and architecture for a neural network, in Proc. IEEE Int. Joint Conf. on Neural Networks (IJCNN), Seattle, WA, vol. 2, pp. 397-404, (1991).
DOI: 10.1109/ijcnn.1991.155366
[4]
X. Yao and Y. Liu, A new evolutionary system for evolving artificial neural networks, IEEE Transactions on Neural Networks, vol. 8, no. 3, pp. 694-713, May (1997).
DOI: 10.1109/72.572107
[5]
G. Wei, Evolutionary Neural Network Based on New Ant Colony Algorithm, in Proc. IEEE International Symposium on Computational Intelligence and Design (ISCID), pp. 318-321, (2008).
DOI: 10.1109/iscid.2008.143
[6]
R. Huang and S. Tong, Evolving Product Unit Neural Networks with Particle Swarm Optimization, in Proc. Fifth International Conference on Image and Graphics (ICIG), IEEE Computer Society, (2009).
DOI: 10.1109/icig.2009.126
[7]
T. Y. Kwok and D. Y. Yeung, Constructive algorithms for structure learning in feedforward neural networks for regression problems, IEEE Transactions on Neural Networks, vol. 8, no. 3, pp. 630-645, May (1997).
DOI: 10.1109/72.572102
[8]
R. Reed, Pruning algorithms: a survey, IEEE Transactions on Neural Networks, vol. 4, pp. 740-747, Sept. (1993).
DOI: 10.1109/72.248452
[9]
T. Ash, Dynamic node creation in backpropagation networks, Connection Science, vol. 1, no. 4, pp. 365-375, (1989).
DOI: 10.1080/09540098908915647
[10]
S. E. Fahlman and C. Lebiere, The cascade-correlation learning architecture, in Advances in Neural Information Processing Systems 2, D. S. Touretzky, Ed. San Mateo, CA: Morgan Kaufmann, pp. 524-532, (1990).
[11]
D. E. Rumelhart, G. E. Hinton, and R. J. Williams, Learning internal representations by error propagation, in Parallel Distributed Processing, vol. 1, D. E. Rumelhart and J. L. McClelland, Eds. Cambridge, MA: MIT Press, pp. 318-362, (1986).
DOI: 10.7551/mitpress/4943.003.0042
[12]
S. E. Fahlman, An empirical study of learning speed in backpropagation networks, Carnegie Mellon Univ., Pittsburgh, PA, Tech. Rep. CMU-CS-88-162, (1988).
[13]
M. Riedmiller and H. Braun, A direct adaptive method for faster backpropagation learning: the RPROP algorithm, in Proc. IEEE Int. Conf. on Neural Networks, San Francisco, CA, pp. 586-591, (1993).
DOI: 10.1109/icnn.1993.298623
[14]
R. Setiono and L. C. K. Hui, Use of a Quasi-Newton Method in a Feedforward Neural Network Construction Algorithm, IEEE Transactions on Neural Networks, vol. 6, no. 1, pp. 273-277, January (1995).
DOI: 10.1109/72.363426
[15]
M. T. Hagan and M. B. Menhaj, Training Feedforward Networks with the Marquardt Algorithm, IEEE Transactions on Neural Networks, vol. 5, no. 6, pp. 989-993, (1994).
DOI: 10.1109/72.329697
[16]
R. Setiono and L. C. K. Hui, Use of a Quasi-Newton Method in a Feedforward Neural Network Construction Algorithm, IEEE Transactions on Neural Networks, vol. 6, no. 1, pp. 273-277, (1995).
DOI: 10.1109/72.363426
[17]
T. Y. Kwok and D. Y. Yeung, Objective functions for training new hidden units in constructive neural networks, IEEE Transactions on Neural Networks, vol. 8, no. 5, pp. 1131-1148, (1997).
DOI: 10.1109/72.623214
[18]
L. Prechelt, Investigation of the CasCor family of learning algorithms, Neural Networks, vol. 10, no. 5, pp. 885-896, (1997).
DOI: 10.1016/s0893-6080(96)00115-3
[19]
L. Ma and K. Khorasani, New training strategies for constructive neural networks with application to regression problems, Neural Networks, vol. 17, no. 4, pp. 589-609, (2004).
DOI: 10.1016/j.neunet.2004.02.002
[20]
S. K. Sharma and P. Chandra, An adaptive slope basic dynamic node creation algorithm for single hidden layer neural networks, in Proc. IEEE Int. Conf. on Computational Intelligence and Communication Networks (CICN 2010), India, 26-28 November 2010, pp. 531-539.
DOI: 10.1109/cicn.2010.38
[21]
Y. Singh and P. Chandra, A class +1 sigmoidal activation functions for FFANNs, J. Econ. Dyn. Control, vol. 28, no. 1, pp. 183-187, (2003).
DOI: 10.1016/s0165-1889(02)00157-4
[22]
P. Chandra and Y. Singh, An activation function adapting training algorithm for sigmoidal feedforward networks, Neurocomputing, vol. 61, pp. 429-437, (2004).
DOI: 10.1016/j.neucom.2004.04.001
[23]
E. Trentin, Networks with trainable amplitude of activation functions, Neural Networks, vol. 14, pp. 471-493, (2001).
DOI: 10.1016/s0893-6080(01)00028-4