
Iteration algorithm

From Encyclopedia of Mathematics

A recursive algorithm realizing a sequence of point-to-set mappings $\Phi_k \colon X \to X$, $k = 0, 1, \ldots$, where $X$ is a topological space, that can be used to compute for an initial point $u_0 \in X$ a sequence of points $u_1, u_2, \ldots$ by the formulas

$$u_{k+1} \in \Phi_k(u_k), \quad k = 0, 1, \ldots . \tag{1}$$

The operation (1) is called an iteration, while the sequence $u_0, u_1, u_2, \ldots$ is called an iterative sequence.
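For illustration, a minimal sketch of generating such an iterative sequence, with a hypothetical single-valued mapping in place of the point-to-set mappings $\Phi_k$ (the mapping and names are assumptions for illustration, not from the article):

```python
# Sketch: generate the iterative sequence u_0, u_1, ... of (1) for a
# hypothetical single-valued mapping phi (here the same mapping at every step).
import math

def iterative_sequence(phi, u0, steps):
    """Return [u_0, u_1, ..., u_steps] with u_{k+1} = phi(u_k)."""
    seq = [u0]
    for _ in range(steps):
        seq.append(phi(seq[-1]))
    return seq

# Example: phi(u) = cos(u) is a contraction near its fixed point u* ≈ 0.739,
# so the sequence settles down after a few iterations.
print(iterative_sequence(math.cos, 1.0, 10))
```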

Iteration methods (or methods of iterative approximation) are used both for finding a solution of an operator equation

$$Au = f, \tag{2}$$

a minimum of a functional, eigenvalues and eigenvectors of an equation $Au = \lambda u$, etc., and for proving the existence of solutions to these problems. An iteration method (1) is called convergent for the initial approximation $u_0$ to a solution $u^*$ of the problem considered if $u_k \to u^*$ as $k \to \infty$.
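For example (a sketch with assumed data, not from the article): for the scalar equation $u^2 = a$ the classical iteration $u_{k+1} = \tfrac{1}{2}(u_k + a/u_k)$ is convergent for any initial approximation $u_0 > 0$ to the solution $u^* = \sqrt{a}$:

```python
# Sketch: the iteration u_{k+1} = (u_k + a/u_k)/2 for the equation u^2 = a
# is convergent for any u_0 > 0 to u* = sqrt(a); the error |u_k - u*| -> 0.
a, u = 2.0, 1.0
for k in range(6):
    u = 0.5 * (u + a / u)
    print(k + 1, u, abs(u - a ** 0.5))
```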

The operators $\Phi_k$ for solving (2), given on a metric linear space $X$, are usually constructed by the formulas

$$u_{k+1} = u_k - H_k(Au_k - f), \quad k = 0, 1, \ldots, \tag{3}$$

where $\{H_k\}$ is some sequence of operators determined by the type of the iteration method. The contracting-mapping principle and its generalizations, or variational minimization methods for a functional related to the problem, lie at the basis of constructing iteration methods of type (1), (3). Various methods for constructing the $H_k$ are used, e.g. variants of the Newton method or methods of descent (cf. Descent, method of). One tries to choose the $H_k$ so that fast convergence is ensured, under the conditions that the numerical realization of the operations $H_k(Au_k - f)$, for a given amount of computer memory, is sufficiently simple, has as low a complexity as possible and is numerically stable.

Iteration methods for solving linear problems are well developed and have been thoroughly studied. Here the iteration methods are divided into linear and non-linear ones. The Gauss method, the Seidel method, the successive overrelaxation method (cf. Relaxation method), and iteration methods with Chebyshev parameters belong to the linear methods; the variational methods belong to the non-linear methods (e.g. the methods of steepest descent and conjugate gradients, the minimal discrepancy method, etc., cf. Steepest descent, method of; Conjugate gradients, method of).

One of the efficient iteration methods is the method using Chebyshev parameters, applicable when $A$ is a self-adjoint operator with spectrum on $[m, M]$, $0 < m \le M$. This method provides an optimum (for the given information on the boundaries of the spectrum) estimate of the convergence at a pre-assigned $N$-th step. The method can be written in the form

$$u_{k+1} = u_k - \alpha_{k+1}(Au_k - f), \quad k = 0, 1, \ldots, N - 1,$$

where

$$\alpha_{k+1} = \frac{2}{M + m + (M - m)\cos\dfrac{(2\theta_N(k+1) - 1)\pi}{2N}},$$

$N$ is the expected number of iterations, and $\theta_N$ is a special permutation of order $N$ that mixes the roots of the Chebyshev polynomials well, which is needed for the stability of the method. For $N = 2^p$ the permutation can, e.g., be constructed as follows: $\theta_1 = (1)$, and if $\theta_m = (i_1, \ldots, i_m)$ has already been constructed, then

$$\theta_{2m} = (i_1,\; 2m + 1 - i_1,\; i_2,\; 2m + 1 - i_2,\; \ldots,\; i_m,\; 2m + 1 - i_m).$$

For $N = 16$ it has the form $(1, 16, 8, 9, 4, 13, 5, 12, 2, 15, 7, 10, 3, 14, 6, 11)$.
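A minimal NumPy sketch of this scheme (an illustration under assumed data, not the article's own code): it is the iteration (3) with $H_k = \alpha_{k+1} I$, using the doubling construction of the permutation described above; the test matrix and spectral bounds are illustrative assumptions.

```python
# Sketch of the Chebyshev-parameter iteration u <- u - alpha*(A u - f) for a
# symmetric positive-definite matrix A with spectrum in [m, M]; the index
# permutation theta is the doubling construction described in the text.
import numpy as np

def chebyshev_permutation(p):
    """Return the stability permutation theta_N for N = 2**p (1-based indices)."""
    theta = [1]
    while len(theta) < 2 ** p:
        m = len(theta)
        doubled = []
        for i in theta:
            doubled.extend([i, 2 * m + 1 - i])
        theta = doubled
    return theta  # e.g. p = 4 gives [1, 16, 8, 9, 4, 13, 5, 12, ...]

def chebyshev_iteration(A, f, u0, m, M, p):
    """Run N = 2**p steps of (3) with H_k = alpha_{k+1} * I and Chebyshev parameters."""
    N = 2 ** p
    theta = chebyshev_permutation(p)
    u = u0.copy()
    for k in range(N):
        j = theta[k]                               # permuted index of the Chebyshev root
        t = np.cos((2 * j - 1) * np.pi / (2 * N))  # root of T_N on (-1, 1)
        alpha = 2.0 / (M + m + (M - m) * t)        # reciprocal of the root mapped to [m, M]
        u = u - alpha * (A @ u - f)
    return u

# Illustrative use on a small SPD matrix with known spectral bounds.
A = np.diag([1.0, 2.0, 5.0, 10.0])
f = np.ones(4)
u = chebyshev_iteration(A, f, np.zeros(4), m=1.0, M=10.0, p=4)
print(np.linalg.norm(A @ u - f))   # residual after N = 16 steps
```

In exact arithmetic the final error after all $N$ steps does not depend on the order in which the parameters $\alpha_{k+1}$ are applied; the permutation $\theta_N$ only controls the growth of intermediate values and round-off errors, which is why it is needed for stability.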

There also exist iteration methods that use several previous approximations $u_k, u_{k-1}, \ldots, u_{k-p+1}$. They are called $p$-step methods and have an increased rate of convergence.
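As an illustration of a 2-step method, the sketch below uses a stationary second-order Richardson (heavy-ball type) scheme $u_{k+1} = u_k - \tau(Au_k - f) + \beta(u_k - u_{k-1})$; the choice of $\tau$ and $\beta$ for a self-adjoint positive-definite $A$ with spectrum in $[m, M]$, and the test data, are assumptions for illustration, not taken from the article.

```python
# Sketch of a 2-step method: u_{k+1} = u_k - tau*(A u_k - f) + beta*(u_k - u_{k-1}),
# with the standard stationary (heavy-ball) parameter choice for an SPD matrix
# whose spectrum lies in [m, M]; the test data are illustrative.
import numpy as np

def two_step_richardson(A, f, u0, m, M, steps):
    tau = 4.0 / (np.sqrt(M) + np.sqrt(m)) ** 2
    beta = ((np.sqrt(M) - np.sqrt(m)) / (np.sqrt(M) + np.sqrt(m))) ** 2
    u_prev, u = u0.copy(), u0.copy()
    for _ in range(steps):
        u, u_prev = u - tau * (A @ u - f) + beta * (u - u_prev), u
    return u

A = np.diag([1.0, 2.0, 5.0, 10.0])
f = np.ones(4)
u = two_step_richardson(A, f, np.zeros(4), m=1.0, M=10.0, steps=30)
print(np.linalg.norm(A @ u - f))   # residual decreases at a Chebyshev-like rate
```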

Iteration methods are extensively used in solving multi-dimensional problems in mathematical physics, and for some classes of problems there exist special fast-converging iteration methods. Examples are: the method of variable directions, the methods explained in [7] for elliptic boundary value problems, and some methods for the problem of particle or radiation transfer (cf. [1]–[7]; Variable-directions method).

References

[1] L.V. Kantorovich, G.P. Akilov, "Funktionalanalysis in normierten Räumen" , Akademie Verlag (1964) (Translated from Russian)
[2] L. Collatz, "Funktionalanalysis und numerische Mathematik" , Springer (1964)
[3] G.I. Marchuk, V.I. Lebedev, "Numerical methods in the theory of neutron transport" , Harwood (1986) (Translated from Russian)
[4] N.S. Bakhvalov, "Numerical methods: analysis, algebra, ordinary differential equations" , MIR (1977) (Translated from Russian)
[5] M.A. Krasnosel'skii, G.M. Vainikko, P.P. Zabreiko, et al., "Approximate solution of operator equations" , Wolters-Noordhoff (1972) (Translated from Russian)
[6] V.I. Lebedev, "Optimization in iteration methods" , Proc. 2nd Math. School Optim. Organ. Comp. 1971 , Inst. Kibernet. Akad. Nauk SSSR (1972) pp. 109–135 (In Russian)
[7] R.P. Fedorenko, "Iterative methods for elliptic difference equations" Russian Math. Surveys , 28 : 2 (1973) pp. 129–195; Uspekhi Mat. Nauk , 28 : 2 (1973) pp. 121–182


Comments

See also the editorial comments to Chebyshev iteration method.

How to Cite This Entry:
Iteration algorithm. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Iteration_algorithm&oldid=16800
This article was adapted from an original article by V.I. Lebedev (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article