A Class of Gradient-Type Methods with Perturbations

Abstract:

In this paper, a new class of methods, called gradient-type methods with perturbations, is proposed, and a new nonmonotone line search technique is employed. Global convergence of these methods is proved under the sole assumption that the gradient function is uniformly continuous on an open convex set containing the iteration sequence.
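The abstract names two ingredients without reproducing the algorithm: a gradient step whose direction is perturbed, and a nonmonotone line search in the style of Grippo, Lampariello and Lucidi [4], which accepts a step if it improves on the worst of the last M function values rather than only the most recent one. The following Python sketch illustrates those two ideas on a toy problem; it is not the authors' exact scheme, and the 1/(k+1) decay of the perturbation is an assumption made here for illustration.

```python
import numpy as np

def perturbed_gradient_method(f, grad, x0, perturb=0.01, M=5,
                              sigma=1e-4, beta=0.5, max_iter=500):
    """Gradient method with a perturbed search direction and a
    nonmonotone (max over the last M values) Armijo line search."""
    rng = np.random.default_rng(0)   # fixed seed: reproducible noise
    x = np.asarray(x0, dtype=float)
    f_hist = [f(x)]
    for k in range(max_iter):
        g = grad(x)
        # Perturbed direction: negative gradient plus additive noise.
        # The 1/(k+1) decay schedule is an assumption for illustration,
        # not taken from the paper.
        d = -(g + (perturb / (k + 1)) * rng.standard_normal(g.shape))
        # Nonmonotone Armijo rule [4]: compare against the maximum of
        # the last M function values instead of the most recent one.
        f_ref = max(f_hist[-M:])
        t = 1.0
        while f(x + t * d) > f_ref + sigma * t * g.dot(d) and t > 1e-12:
            t *= beta                # backtrack
        x = x + t * d
        f_hist.append(f(x))
    return x, f_hist

# Toy smooth problem: f(x) = 0.5 * ||x||^2, whose gradient is x itself.
f = lambda x: 0.5 * x.dot(x)
grad = lambda x: x
x_star, hist = perturbed_gradient_method(f, grad, np.array([3.0, -4.0]))
```

Because the perturbation is summable, the iterates still approach the minimizer, which matches the flavor of the convergence analysis described in the abstract.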

Info:

Periodical:

Advanced Materials Research (Volumes 268-270)

Pages:

904-907

Online since:

July 2011

Copyright:

© 2011 Trans Tech Publications Ltd. All Rights Reserved

References:

[1] D.P. Bertsekas, Nonlinear Programming, Athena Scientific, Belmont, MA, (1995).

[2] D.P. Bertsekas and J.N. Tsitsiklis, Neuro-Dynamic Programming, Athena Scientific, Belmont, MA, (1996).

[3] A.A. Gaivoronski, Convergence properties of backpropagation for neural nets via theory of stochastic gradient methods. Part 1, Optim. Methods Software, 4(1994), pp.117-134.

DOI: 10.1080/10556789408805582

[4] L. Grippo, F. Lampariello and S. Lucidi, A nonmonotone line search technique for Newton's method, SIAM J. Numer. Anal., 23(1986), pp.707-716.

DOI: 10.1137/0723046

[5] M.X. Li and C.Y. Wang, Convergence property of gradient-type methods with non-monotone line search in the presence of perturbation, Appl. Math. Comput., 174(2006), pp.854-868.

DOI: 10.1016/j.amc.2005.05.030

[6] Z.J. Shi, J. Shen, Convergence of nonmonotone line search method, J. Comput. Appl. Math., 193(2006), pp.397-412.

[7] Y.J. Wang, N.H. Xiu, Nonlinear Programming Theory and Algorithms, Shanxi Science and Technology Press, (2004).

[8] Y.X. Yuan, W.Y. Sun, Optimization Theory and Algorithms, Science Press, (1997).
