Iteration algorithm


A recursive algorithm realizing a sequence of point-to-set mappings $ A _ {k} : V \rightarrow V $, where $ V $ is a topological space, that can be used to compute, for an initial point $ u ^ {0} \in V $, a sequence of points $ u ^ {k} \in V $ by the formula

$$ \tag{1 } u ^ {k+1} = A _ {k} u ^ {k} ,\ \ k = 0 , 1 ,\dots . $$

The operation (1) is called an iteration, while the sequence $ \{ u ^ {k} \} $ is called an iterative sequence.
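
For illustration (a minimal sketch, not part of the original article), scheme (1) can be realized as a loop that applies the mappings $ A _ {k} $ to an initial point; the contracting mapping $ A _ {k} u = \cos u $ on $ [ 0 , 1 ] $ used below is a made-up example.

```python
import math

def iterate(mapping, u0, num_steps):
    """Generate the iterative sequence u^{k+1} = A_k u^k of scheme (1).

    `mapping(k, u)` plays the role of A_k applied to u."""
    u = u0
    sequence = [u]
    for k in range(num_steps):
        u = mapping(k, u)          # one iteration: u^{k+1} = A_k u^k
        sequence.append(u)
    return sequence

# Made-up example: A_k u = cos(u) is a contraction on [0, 1], so the
# iterative sequence converges to the fixed point of cos.
seq = iterate(lambda k, u: math.cos(u), u0=0.5, num_steps=50)
print(seq[-1])   # approx. 0.739085, the solution of u = cos u
```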

Iteration methods (or methods of iterative approximation) are used both for finding a solution to an operator equation

$$ \tag{2 } A u = f , $$

a minimum of a functional, eigenvalues and eigenvectors of an equation $ A u = \lambda u $, etc., and for proving the existence of solutions to these problems. An iteration method (1) is called convergent for the initial approximation $ u ^ {0} $ to a solution $ u $ of the problem considered if $ u ^ {k} \rightarrow u $ as $ k \rightarrow \infty $.
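
As a further illustration (not from the article; the matrix below is a made-up example), the eigenvalue problem $ A u = \lambda u $ can be treated by an iteration of type (1) with $ A _ {k} u = A u / \| A u \| $, i.e. the classical power method:

```python
import numpy as np

def power_method(A, u0, num_steps=100):
    """Iteration of type (1) with A_k u = A u / ||A u|| (power method).

    Converges to an eigenvector of the dominant eigenvalue under the usual
    assumptions (a simple dominant eigenvalue and a suitable u^0)."""
    u = u0 / np.linalg.norm(u0)
    for _ in range(num_steps):
        v = A @ u
        u = v / np.linalg.norm(v)
    # The Rayleigh quotient approximates the dominant eigenvalue.
    return u, float(u @ (A @ u))

# Made-up symmetric example matrix.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
u, lam = power_method(A, np.array([1.0, 0.0]))
print(lam)   # approx. 3.618, the largest eigenvalue of A
```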

Operators $ A _ {k} $ for solving (2), given on a metric linear space $ V $, are usually constructed by the formulas

$$ \tag{3 } A _ {k} u ^ {k} = u ^ {k} - H _ {k} ( A u ^ {k} - f ) , $$

where $ \{ {H _ {k} } : {V \rightarrow V } \} $ is some sequence of operators determined by the type of the iteration method. The contracting-mapping principle and its generalizations, or variational methods for minimizing a functional related to the problem, lie at the basis of constructing iteration methods of type (1), (3). Various methods for constructing the $ A _ {k} $ are used, e.g. variants of the Newton method or of the method of descent (cf. Descent, method of). One tries to choose the $ H _ {k} $ so that fast convergence $ u ^ {k} \rightarrow u $ is ensured, while the numerical realization of the operations $ A _ {k} u ^ {k} $ remains sufficiently simple for a given amount of computer memory, has as low a complexity as possible and is numerically stable.

Iteration methods for solving linear problems are well developed and have been studied thoroughly. These methods are divided into linear and non-linear ones. The Gauss method, the Seidel method, the successive overrelaxation method (cf. Relaxation method), and iteration methods with Chebyshev parameters belong to the linear methods; variational methods belong to the non-linear methods (e.g. the methods of steepest descent and conjugate gradients, the minimal discrepancy method, etc., cf. Steepest descent, method of; Conjugate gradients, method of).

One of the efficient iteration methods is the method with Chebyshev parameters, applicable when $ A $ is a self-adjoint operator with spectrum on $ [ m , M ] $, $ M > m > 0 $. This method provides an optimal estimate of the convergence (for the given information on the boundaries of the spectrum) at a pre-assigned $ N $-th step. The method can be written in the form

$$ u ^ {k+1} = u ^ {k} - \alpha _ {k+1} ( A u ^ {k} - f ) ,\ \ k = 0 \dots N - 1 , $$

where

$$ \alpha _ {k+1} = 2 \left ( M + m - ( M - m ) \cos \frac{( 2 j _ {k} - 1 ) \pi }{2 N } \right ) ^ {-1} $$

and $ N $ is the pre-assigned number of iterations; for numerical stability one uses a special permutation $ \kappa _ {N} = ( j _ {1} \dots j _ {N} ) $ of order $ N $ that mixes the roots of the Chebyshev polynomial well. For $ N = 2 ^ {n} $ the permutation $ \kappa _ {N} $ can, e.g., be constructed as follows: $ \kappa _ {2} = ( 1 , 2 ) $, and if $ \kappa _ {2 ^ {i-1} } = ( j _ {1} \dots j _ {2 ^ {i-1} } ) $ has already been constructed, then $ \kappa _ {2 ^ {i} } = ( j _ {1} , 2 ^ {i} + 1 - j _ {1} , j _ {2} , 2 ^ {i} + 1 - j _ {2} ,\dots ) $. For $ N = 16 $ it has the form (1, 16, 8, 9, 4, 13, 5, 12, 2, 15, 7, 10, 3, 14, 6, 11).
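
The following sketch (added here for illustration; the test matrix and the spectral bounds $ m , M $ are made-up assumptions) implements the iteration with Chebyshev parameters together with the above construction of $ \kappa _ {N} $:

```python
import numpy as np

def chebyshev_permutation(n_power):
    """Build kappa_N for N = 2**n_power by the doubling rule of the text:
    kappa_2 = (1, 2), and every entry j of kappa_{2^(i-1)} is replaced by
    the pair (j, 2^i + 1 - j)."""
    kappa = [1, 2]
    for i in range(2, n_power + 1):
        size = 2 ** i
        kappa = [x for j in kappa for x in (j, size + 1 - j)]
    return kappa

def chebyshev_iteration(A, f, u0, m, M, n_power):
    """Iteration u^{k+1} = u^k - alpha (A u^k - f) with the Chebyshev
    parameters alpha taken in the order given by the permutation kappa_N."""
    N = 2 ** n_power
    kappa = chebyshev_permutation(n_power)
    u = u0.copy()
    for k in range(N):
        j = kappa[k]
        alpha = 2.0 / (M + m - (M - m) * np.cos((2 * j - 1) * np.pi / (2 * N)))
        u = u - alpha * (A @ u - f)
    return u

# For N = 16 the permutation reproduces the ordering quoted in the text.
assert chebyshev_permutation(4) == [1, 16, 8, 9, 4, 13, 5, 12,
                                    2, 15, 7, 10, 3, 14, 6, 11]

# Made-up symmetric positive-definite test problem with spectrum in [m, M].
A = np.diag([1.0, 2.0, 5.0, 10.0])
f = np.ones(4)
u = chebyshev_iteration(A, f, np.zeros(4), m=1.0, M=10.0, n_power=4)
print(np.linalg.norm(A @ u - f))   # residual is small after N = 16 steps
```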

There exist iteration methods using $ r $ previous approximations $ u ^ {k} \dots u ^ {k-r+1} $. They are called $ r $-step methods and have an increased rate of convergence.
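
As an illustration of a two-step ($ r = 2 $) method (not taken from the article), the sketch below implements a heavy-ball-type iteration $ u ^ {k+1} = u ^ {k} - \alpha ( A u ^ {k} - f ) + \beta ( u ^ {k} - u ^ {k-1} ) $; the parameter choices are the classical ones for a symmetric positive-definite $ A $ with spectrum in $ [ m , M ] $ and are stated here as assumptions.

```python
import numpy as np

def two_step_iteration(A, f, u0, m, M, num_steps=100):
    """A 2-step (r = 2) method: the heavy-ball iteration
    u^{k+1} = u^k - alpha (A u^k - f) + beta (u^k - u^{k-1})."""
    # Assumed classical parameters for a symmetric positive-definite A
    # with spectrum in [m, M].
    alpha = 4.0 / (np.sqrt(M) + np.sqrt(m)) ** 2
    beta = ((np.sqrt(M) - np.sqrt(m)) / (np.sqrt(M) + np.sqrt(m))) ** 2
    u_prev, u = u0.copy(), u0.copy()
    for _ in range(num_steps):
        u_next = u - alpha * (A @ u - f) + beta * (u - u_prev)
        u_prev, u = u, u_next
    return u

# Made-up symmetric positive-definite test problem (same as the sketch above).
A = np.diag([1.0, 2.0, 5.0, 10.0])
f = np.ones(4)
u = two_step_iteration(A, f, np.zeros(4), m=1.0, M=10.0)
print(np.linalg.norm(A @ u - f))   # residual after 100 two-step iterations
```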

Iteration methods are extensively used in solving multi-dimensional problems in mathematical physics, and for some classes of problems there exist special fast-converging iteration methods. Examples are: the method of variable directions, the methods explained in [7] for elliptic boundary-initial value problems, and some methods for the problem of particle transfer or radiation (cf. [1]–[7]; Variable-directions method).

References

[1] L.V. Kantorovich, G.P. Akilov, "Funktionalanalysis in normierten Räumen" , Akademie Verlag (1964) (Translated from Russian)
[2] L. Collatz, "Funktionalanalysis und numerische Mathematik" , Springer (1964)
[3] G.I. Marchuk, V.I. Lebedev, "Numerical methods in the theory of neutron transport" , Harwood (1986) (Translated from Russian)
[4] N.S. Bakhvalov, "Numerical methods: analysis, algebra, ordinary differential equations" , MIR (1977) (Translated from Russian)
[5] M.A. Krasnosel'skii, G.M. Vainikko, P.P. Zabreiko, et al., "Approximate solution of operator equations" , Wolters-Noordhoff (1972) (Translated from Russian)
[6] V.I. Lebedev, "Optimization in iteration methods" , Proc. 2nd Math. School Optim. Organ. Comp. 1971 , Inst. Kibernet. Akad. Nauk SSSR (1972) pp. 109–135 (In Russian)
[7] R.P. Fedorenko, "Iterative methods for elliptic difference equations" Russian Math. Surveys , 2 (1973) pp. 129–195; Uspekhi Mat. Nauk , 28 (1973) pp. 121–182

Comments

See also the editorial comments to Chebyshev iteration method.

References

[a1] D.M. Young, "Iterative solution of large linear systems" , Acad. Press (1971)
[a2] A. George, J.W.-H. Liu, "Computer solution of large sparse positive definite systems" , Prentice-Hall (1981)
[a3] J.E. Dennis, Jr., R. Schnabel, "Least change secant updates for quasi-Newton methods" SIAM Review , 21 (1979) pp. 443–459
[a4] J.M. Ortega, W.C. Rheinboldt, "Iterative solution of non-linear equations in several variables" , Acad. Press (1970)
[a5] W.C. Rheinboldt, "Methods for solving systems of nonlinear equations" , SIAM (1970)
[a6] J.F. Traub, "Iterative methods for the solution of equations" , Prentice-Hall (1964)
[a7] R. Wait, "The numerical solution of algebraic equations" , Wiley (1979)
[a8] E. Wasserstrom, "Numerical solution by the continuation method" SIAM Review , 15 (1973) pp. 89–119
[a9] R.S. Varga, "Matrix iterative analysis" , Prentice-Hall (1962)
This article was adapted from an original article by V.I. Lebedev (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.