Parallel Iterative Methods for Nonlinear Programming Problems

Abstract:

In this paper, we present two parallel multiplicative algorithms for convex programming. If the objective function is differentiable and convex on the positive orthant of R^n, has compact level sets, and has a locally Lipschitz continuous gradient, we prove that these algorithms converge to a solution of the minimization problem. The proofs rely essentially on results for the sequential methods established by Eggermont [1].
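
The abstract does not reproduce the updates themselves, so the following minimal Python sketch is included only to fix ideas: it implements a generic multiplicative (exponentiated-gradient) iteration for minimizing a differentiable convex function over the positive orthant. The step size omega, the stopping rule, and the quadratic test problem are illustrative assumptions and are not the two parallel algorithms analysed in the paper; the sketch merely shows why multiplicative updates keep the iterates positive and why the componentwise updates lend themselves to parallel evaluation.

import numpy as np

def multiplicative_minimize(grad_f, x0, omega=0.05, max_iter=1000, tol=1e-9):
    # Generic multiplicative (exponentiated-gradient) iteration:
    #   x_{k+1, i} = x_{k, i} * exp(-omega * [grad f(x_k)]_i)
    # Starting from x0 > 0, every iterate stays in the positive orthant,
    # and each coordinate can be updated independently (hence in parallel).
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        x_next = x * np.exp(-omega * g)
        if np.linalg.norm(x_next - x) <= tol * (1.0 + np.linalg.norm(x)):
            return x_next
        x = x_next
    return x

# Illustrative test problem (an assumption, not taken from the paper):
# minimize f(x) = 0.5 * ||A x - b||^2 over the positive orthant.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
grad_f = lambda x: A.T @ (A @ x - b)
print(multiplicative_minimize(grad_f, x0=np.ones(2)))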

Info:

Pages: 105-110
Online since: December 2010
Copyright: © 2011 Trans Tech Publications Ltd. All Rights Reserved

References:

[1] P. P. B. Eggermont, Multiplicative iterative algorithms for convex programming, Linear Algebra Appl., 130 (1990) 25-42. DOI: 10.1016/0024-3795(90)90204-p

[2] L. A. Shepp and Y. Vardi, Maximum likelihood reconstruction in emission tomography, IEEE Trans. Med. Imaging, MI-1 (1982) 113-122. DOI: 10.1109/tmi.1982.4307558

[3] O. L. Mangasarian, Nonlinear Programming, McGraw-Hill, New York, 1969.

[4] Y. Vardi, L. A. Shepp and L. Kaufman, A statistical model for positron emission tomography, J. Amer. Statist. Assoc., 80 (1985) 8-38.

[5] R. T. Rockafellar, Monotone operators and the proximal point algorithm, SIAM J. Control Optim., 14 (1976) 877-898. DOI: 10.1137/0314056

[6] L. Elsner, I. Koltracht and M. Neumann, On the convergence of asynchronous paracontractions with application to tomographic reconstruction from incomplete data, Linear Algebra Appl., 130 (1990) 65-82. DOI: 10.1016/0024-3795(90)90206-r