Iteration methods

for a matrix eigen value problem

Methods for finding the eigen values and eigen vectors (or a principal basis) of a matrix without the preliminary calculation of its characteristic polynomial. These methods differ substantially for problems of average size, in which the matrices can be stored entirely in the computer memory, and for problems of high order, in which the information is usually stored in compact form.

The first iteration method was proposed by C.G.J. Jacobi [1] for the computation of the eigen values and eigen vectors of real symmetric matrices (cf. Rotation method). This method can be generalized to complex Hermitian matrices, and also to the larger class of normal matrices.

There are a number of generalizations of Jacobi's method to matrices of arbitrary form. A typical algorithm of this class consists of a sequence of elementary steps, performed according to the scheme

$$A_{k+1} = T_k^{-1} A_k T_k, \qquad k = 1, 2, \ldots; \quad A_1 = A.$$

The role of the similarity transformation with the (elementary) matrix $T_k$ consists here in reducing the Euclidean norm of the current matrix $A_k$, i.e. $A_{k+1}$ is nearer to being normal than $A_k$. For $T_k$ one usually takes a rotation matrix or its unitary analogue. The aim of the similarity transformation with this matrix is, as in Jacobi's classical method, to annihilate off-diagonal entries in a Hermitian matrix related to $A_k$, e.g. in its Hermitian part $\tfrac{1}{2}(A_k + A_k^*)$. The matrices $A_k$ converge, as $k$ increases, to a matrix of diagonal or quasi-diagonal form, and the accumulated products of the matrices $T_k$ give a matrix whose columns are the approximate eigen vectors, or base vectors of invariant subspaces, of this matrix (cf. [2], [3]).
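For the classical symmetric case this elementary step can be written down explicitly. Below is a minimal numpy sketch of one cyclic sweep of plane rotations for a real symmetric matrix (the function name jacobi_sweep and the threshold 1e-15 are illustrative choices; this is the classical method, not the norm-reducing variant for general matrices):

    import numpy as np

    def jacobi_sweep(A, V):
        # One cyclic sweep: annihilate each off-diagonal pair (i, j) in turn
        # by a plane rotation; V accumulates the product of the rotations.
        n = A.shape[0]
        for i in range(n - 1):
            for j in range(i + 1, n):
                if abs(A[i, j]) < 1e-15:
                    continue
                # Rotation angle that zeroes the (i, j) entry.
                theta = 0.5 * np.arctan2(2 * A[i, j], A[j, j] - A[i, i])
                c, s = np.cos(theta), np.sin(theta)
                R = np.eye(n)
                R[i, i] = R[j, j] = c
                R[i, j], R[j, i] = s, -s
                A = R.T @ A @ R      # similarity transformation
                V = V @ R            # accumulated product of rotations
        return A, V

Repeating such sweeps until the off-diagonal norm is negligible drives $A$ to diagonal form; the diagonal then approximates the eigen values, and the columns of $V$ the eigen vectors.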

Along with the methods described above, algorithms of another class, the so-called power methods, have been developed. The most effective method in this direction, and the one most often used for solving problems of average size, is the QR-algorithm (cf. [4], [5a], [5b]). The iterations of the QR-algorithm are performed according to the following scheme:

$$A_k = Q_k R_k, \qquad A_{k+1} = R_k Q_k, \qquad k = 1, 2, \ldots; \quad A_1 = A. \tag{1}$$

In it, $Q_k$ is an orthogonal (or unitary) matrix and $R_k$ a right-triangular matrix. In the transition from $A_k$ to $A_{k+1}$ one first finds the orthogonal-triangular decomposition of $A_k$, after which the factors $R_k$ and $Q_k$ are multiplied in reverse order. If $P_k = Q_1 \cdots Q_k$ and $S_k = R_k \cdots R_1$, then (1) implies

$$A_{k+1} = P_k^{*} A P_k, \qquad A^k = P_k S_k. \tag{2}$$

Thus, the QR-algorithm generates a sequence of matrices $A_{k+1}$ that are orthogonally similar to the initial matrix $A$; moreover, the transforming matrix $P_k$ is the orthogonal component in the decomposition (2) of the power $A^k$.
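As a concrete illustration, a minimal numpy sketch of the unshifted iteration (1) might read as follows (the function name qr_algorithm and the fixed iteration count are illustrative; no Hessenberg reduction or shifts are used, so convergence may be slow):

    import numpy as np

    def qr_algorithm(A, iterations=500):
        # Unshifted QR iteration (1): A_k = Q_k R_k, A_{k+1} = R_k Q_k.
        A_k = A.copy().astype(float)
        P = np.eye(A.shape[0])       # accumulated product P_k = Q_1 ... Q_k
        for _ in range(iterations):
            Q, R = np.linalg.qr(A_k)
            A_k = R @ Q              # orthogonally similar to A
            P = P @ Q
        return A_k, P

For a matrix whose eigen values have distinct absolute values, the diagonal of the returned matrix approximates the spectrum.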

When iterating by the scheme (1), the matrices $A_k$ converge to a right-triangular or quasi-triangular matrix, and the rate of convergence to zero of the subdiagonal entries is determined by the quotients of the absolute values of the various eigen values and is, generally speaking, quite slow. To improve the convergence of the QR-algorithm one uses so-called shifts, which leads to the following variant of (1):

$$A_k - \kappa_k E = Q_k R_k, \qquad A_{k+1} = R_k Q_k + \kappa_k E, \qquad k = 1, 2, \ldots, \tag{3}$$

where $E$ is the identity matrix and $\kappa_k$ is the shift.

Usually, by the application of shifts (e.g. $\kappa_k = a_{nn}^{(k)}$) one obtains a faster convergence to zero of the off-diagonal entries of the last row (asymptotically quadratic in the general case and cubic for Hermitian matrices), with a correspondingly fast stabilization of the diagonal entry $a_{nn}^{(k)}$. After the value of this entry has stabilized, further iteration is performed with the principal submatrix of order $n - 1$, etc. The eigen vectors of the resulting triangular matrix give, after multiplication by the accumulated product of the orthogonal matrices $Q_k$, the eigen vectors of the initial matrix $A$.
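A hedged sketch of (3) with the simple shift $\kappa_k = a_{nn}^{(k)}$ and deflation of the last row (assuming real eigen values; production codes use the Wilkinson shift or implicit double shifts, which also avoid the stagnation the simple shift can suffer):

    import numpy as np

    def shifted_qr_eigenvalues(A, tol=1e-12, max_iter=10000):
        # Shifted QR iteration (3) with deflation: once the last subdiagonal
        # entry is negligible, a_nn is accepted as an eigenvalue and the
        # iteration continues on the leading principal submatrix.
        A = A.copy().astype(float)
        n = A.shape[0]
        eigenvalues = []
        for _ in range(max_iter):
            if n == 1:
                eigenvalues.append(A[0, 0])
                break
            if abs(A[n-1, n-2]) < tol * (abs(A[n-1, n-1]) + abs(A[n-2, n-2])):
                eigenvalues.append(A[n-1, n-1])     # deflate the last row
                n -= 1
                A = A[:n, :n]
                continue
            kappa = A[n-1, n-1]                     # shift kappa_k = a_nn
            Q, R = np.linalg.qr(A - kappa * np.eye(n))
            A = R @ Q + kappa * np.eye(n)
        return eigenvalues                          # found bottom-up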

Iteration by (1) or (3) is applied to matrices that have previously been reduced to so-called Hessenberg form. One says that a matrix $A$ is in right (upper) Hessenberg form if $a_{ij} = 0$ for $i > j + 1$. The QR-algorithm preserves the Hessenberg form, which substantially reduces the cost of each iteration. There are other important possibilities for reducing the amount of computation, e.g. the implicit use of shifts, which allows one to find complex-conjugate eigen values of real matrices without resorting to complex arithmetic.
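In practice the reduction to Hessenberg form is carried out once, by orthogonal similarity transformations; e.g. with scipy (a minimal sketch):

    import numpy as np
    from scipy.linalg import hessenberg

    A = np.random.default_rng(0).standard_normal((6, 6))
    H, Q = hessenberg(A, calc_q=True)       # A = Q H Q^T, H upper Hessenberg
    assert np.allclose(Q @ H @ Q.T, A)
    assert np.allclose(np.tril(H, -2), 0)   # a_ij = 0 for i > j + 1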

In problems of high order (from hundreds to thousands), the matrices are usually sparse, i.e. have relatively few non-zero entries. Moreover, it is usually required to compute not all but only a few eigen values and corresponding eigen vectors. In a typical case the eigen values required are the largest or the smallest in absolute value.

The methods described above, based on similarity transformations, destroy the sparseness of the matrix and are therefore not recommended here. Fundamental for problems of high order are methods in which the elementary operation is the multiplication of the matrix by a vector. In what follows it is supposed that the eigen values of the matrix $A$ are enumerated in order of decreasing absolute value:

$$|\lambda_1| \geq |\lambda_2| \geq \cdots \geq |\lambda_n|.$$

The power method, for determining an eigen value of maximal absolute value, has the widest domain of applicability. Starting from an initial approximation $x_0$, one constructs a sequence of normalized vectors

$$y_{k+1} = A x_k, \qquad x_{k+1} = \frac{y_{k+1}}{\|y_{k+1}\|}, \qquad k = 0, 1, \ldots \tag{4}$$

This sequence converges to an eigen vector corresponding to $\lambda_1$ if: 1) all elementary divisors of $A$ related to $\lambda_1$ are linear; 2) there are no other eigen values of the same absolute value; and 3) in the decomposition of $x_0$ with respect to a principal basis of $A$ the component in the eigen space corresponding to $\lambda_1$ is non-trivial. However, the convergence of the power method is, as a rule, slow, being determined by the quotient $|\lambda_2| / |\lambda_1|$.
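A minimal numpy sketch of (4) (the function name power_method and the fixed iteration count are illustrative; a practical code would test convergence instead):

    import numpy as np

    def power_method(A, x0, iterations=1000):
        # Power method (4): repeated multiplication by A with normalization.
        x = x0 / np.linalg.norm(x0)
        for _ in range(iterations):
            y = A @ x
            x = y / np.linalg.norm(y)
        # Rayleigh quotient as the estimate of lambda_1.
        return x @ (A @ x) / (x @ x), x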

If an approximation $\lambda$ to the desired eigen value $\lambda_i$ is known, then one obtains a faster convergence by the method of inverse iteration. Instead of (4), one constructs a sequence determined by

$$(A - \lambda E) y_{k+1} = x_k, \qquad x_{k+1} = \frac{y_{k+1}}{\|y_{k+1}\|}, \qquad k = 0, 1, \ldots \tag{5}$$

The method of inverse iteration is, basically, a power method for $(A - \lambda E)^{-1}$, which has the strongly-dominating eigen value $(\lambda_i - \lambda)^{-1}$. However, the realization of (5) requires the solution of a linear system with matrix $A - \lambda E$ at every step, and even when using special methods for sparse systems this increases the demands on the computer memory in comparison with the power method.
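A sketch of (5) for a scipy sparse matrix, factoring $A - \lambda E$ once so that each step costs only one sparse solve (the names and the fixed iteration count are illustrative):

    import numpy as np
    from scipy.sparse import eye as speye
    from scipy.sparse.linalg import splu

    def inverse_iteration(A, lam, x0, iterations=50):
        # Inverse iteration (5): power method applied to (A - lam*E)^{-1}.
        lu = splu((A - lam * speye(A.shape[0])).tocsc())
        x = x0 / np.linalg.norm(x0)
        for _ in range(iterations):
            y = lu.solve(x)
            x = y / np.linalg.norm(y)
        return x @ (A @ x) / (x @ x), x     # Rayleigh quotient, eigenvector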

So-called methods of simultaneous iteration are used in order to calculate groups of eigen values. They are generalizations of the power method: instead of iterating a single vector, one actually constructs iterations under $A$ of an entire subspace. Stewart's method is a typical representative of this group of methods [6]. Suppose that the eigen values of $A$ satisfy

$$|\lambda_1| \geq \cdots \geq |\lambda_m| > |\lambda_{m+1}| \geq \cdots \geq |\lambda_n|.$$

One chooses an initial $n \times m$-matrix $Q_0$ with orthonormal columns. One then constructs a sequence of matrices $Q_k$, starting from $Q_0$, by

$$Z_k = A Q_{k-1}, \qquad Z_k = \tilde{Q}_k R_k, \qquad Q_k = \tilde{Q}_k W_k, \qquad k = 1, 2, \ldots \tag{6}$$

In the second formula, $R_k$ is a right-triangular $m \times m$-matrix and $\tilde{Q}_k$ is an $n \times m$-matrix with orthonormal columns. The aim of this decomposition, which need not be computed at every iteration, is to preserve the linear independence of the columns of $Z_k$, which may in a practical sense be destroyed by repeated multiplication by $A$. The third formula in (6) plays an important role in accelerating the convergence of the method. The orthogonal $m \times m$-matrix $W_k$ participating in this formula has the following meaning. If $B_k$ denotes the $m \times m$-matrix $\tilde{Q}_k^* A \tilde{Q}_k$, then $W_k$ reduces $B_k$ to Schur form, i.e. to a right-triangular matrix. The matrix $W_k$ can be constructed using the QR-algorithm; its columns form a Schur basis of $B_k$, characterized by the fact that for each $i$, $1 \leq i \leq m$, the linear span of the first $i$ vectors forms an invariant subspace of $B_k$. The matrices $Q_k$ of Stewart's method converge to a matrix whose columns form a Schur basis for the invariant subspace of $A$ corresponding to $\lambda_1, \ldots, \lambda_m$. Here, the convergence of the $i$-th column is determined by the quotient $|\lambda_{m+1}| / |\lambda_i|$.
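A minimal dense sketch of (6), with scipy's Schur decomposition standing in for the QR-algorithm applied to the small matrix $B_k$ (names and iteration count are illustrative; in practice the Schur step is not performed at every iteration):

    import numpy as np
    from scipy.linalg import schur

    def simultaneous_iteration(A, m, iterations=200, seed=0):
        # Stewart-type simultaneous iteration (6) for the invariant subspace
        # belonging to the m eigenvalues of largest absolute value.
        n = A.shape[0]
        rng = np.random.default_rng(seed)
        Q, _ = np.linalg.qr(rng.standard_normal((n, m)))
        for _ in range(iterations):
            Z = A @ Q                       # iterate the whole subspace
            Q_tilde, R = np.linalg.qr(Z)    # restore orthonormal columns
            B = Q_tilde.T @ A @ Q_tilde     # m x m matrix B_k
            T, W = schur(B)                 # W_k reduces B_k to Schur form
            Q = Q_tilde @ W                 # acceleration step Q_k
        return Q                            # columns: approximate Schur basis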

In the case of a real symmetric matrix, additional possibilities arise. They are related to the treatment of eigen values as stationary points of the Rayleigh functional

$$\phi(x) = \frac{(Ax, x)}{(x, x)}. \tag{7}$$

Methods for the unconstrained optimization of (7) can be used to determine the extreme points of the spectrum. The theory thus obtained parallels that of the iteration methods for solving positive-definite linear systems, and leads to the same algorithms: coordinate relaxation, successive over-relaxation, steepest descent, and conjugate gradients (cf. [7]).
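For instance, since the gradient of (7) at a unit vector $x$ is proportional to $Ax - \phi(x)x$, the simplest of these, steepest descent toward the smallest eigen value of a symmetric matrix, can be sketched as follows (the fixed step size eta is an illustrative simplification and must be small relative to $\|A\|$; practical codes use a line search or conjugate gradients):

    import numpy as np

    def rayleigh_descent(A, x0, eta=0.1, iterations=5000):
        # Minimize the Rayleigh functional (7) on the unit sphere; the
        # minimum value is the smallest eigenvalue of the symmetric A.
        x = x0 / np.linalg.norm(x0)
        for _ in range(iterations):
            phi = x @ (A @ x)            # Rayleigh quotient (x has norm 1)
            grad = A @ x - phi * x       # gradient of (7) on the sphere
            x = x - eta * grad
            x /= np.linalg.norm(x)       # project back to the unit sphere
        return x @ (A @ x), x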

The methods listed find the eigen values "one after another". Lanczos' method is used to determine groups of eigen values of a symmetric matrix simultaneously [8]; the method was originally proposed for the tri-diagonalization of a symmetric matrix $A$ of order $n$. The following observation lies at the foundation of the method: if the sequence of vectors $q, Aq, \ldots, A^{n-1}q$ is linearly independent, then the matrix $A$ is reduced to tri-diagonal form $T$ in the basis $q_1, \ldots, q_n$ obtained by orthonormalization of this sequence. The vectors $q_j$ are constructed by three-term recurrence formulas:

$$\beta_j q_{j+1} = A q_j - \alpha_j q_j - \beta_{j-1} q_{j-1}, \qquad j = 1, 2, \ldots,$$

in which $\alpha_j = (A q_j, q_j)$, $\beta_0 q_0 = 0$, and $\beta_j$ is chosen so that $\|q_{j+1}\| = 1$; the coefficients $\alpha_j$, $\beta_j$ determine $T$ (the $\alpha_j$ form its diagonal, the $\beta_j$ its sub- and superdiagonal). After $k$ steps of the orthogonalization process, the vectors $q_1, \ldots, q_k$ as well as the principal $k \times k$ submatrix $T_k$ of $T$ are known. In many cases, already for $k \ll n$, part of the eigen values of $T_k$ are sufficiently good approximations to some eigen values of $A$. The corresponding eigen vectors of $T_k$ can be used to construct approximate eigen vectors of $A$. If the required accuracy has not yet been achieved, one may choose another initial vector in the linear span of $q_1, \ldots, q_k$ and repeat the process, making $k$ more steps, etc. This is Lanczos' method in its iterative treatment (cf. [9]).
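A minimal sketch of $k$ Lanczos steps in exact arithmetic (full or selective reorthogonalization, which finite-precision implementations need, is omitted; breakdown $\beta_j = 0$ is not handled):

    import numpy as np

    def lanczos(A, q1, k):
        # k steps of the three-term recurrence; the eigenvalues of the
        # k x k tridiagonal T_k (Ritz values) approximate those of A.
        n = len(q1)
        Q = np.zeros((n, k))
        alpha, beta = np.zeros(k), np.zeros(k)
        Q[:, 0] = q1 / np.linalg.norm(q1)
        for j in range(k):
            w = A @ Q[:, j]
            alpha[j] = w @ Q[:, j]
            w = w - alpha[j] * Q[:, j]
            if j > 0:
                w = w - beta[j - 1] * Q[:, j - 1]
            if j + 1 < k:
                beta[j] = np.linalg.norm(w)
                Q[:, j + 1] = w / beta[j]
        T = np.diag(alpha) + np.diag(beta[:k-1], 1) + np.diag(beta[:k-1], -1)
        return T, Q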

The search for interior points of the spectrum of a sparse matrix of high order requires the ability to invert matrices of the form $A - \sigma_j E$ for a sequence of shifts $\sigma_1, \sigma_2, \ldots$ [10].
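In scipy, for example, this spectral transformation is available through the shift-invert mode of the sparse symmetric eigensolver (a sketch; the test matrix and the shift sigma are illustrative):

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import eigsh

    # Sparse symmetric test matrix with eigenvalues 1, 2, ..., 1000.
    A = diags([np.arange(1.0, 1001.0)], [0]).tocsc()
    vals, vecs = eigsh(A, k=4, sigma=500.0)   # 4 eigenvalues nearest 500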

References

[1] C.G.J. Jacobi, "Ueber ein leichtes Verfahren, die in der Theorie der Säcularstörungen vorkommenden Gleichungen numerisch aufzulösen" J. Reine Angew. Math. , 30 (1846) pp. 51–94
[2] P.J. Eberlein, "A Jacobi-like method for the automatic computation of eigenvalues and eigenvectors of an arbitrary matrix" J. Soc. Industr. Appl. Math. , 10 : 1 (1962) pp. 74–88
[3] V.V. Voevodin, "The solution of the complete eigenvalue problem by a generalized rotation method" Vychisl. Met. i Programmirov. , 3 (1965) pp. 89–105 (In Russian)
[4] V.N. Kublanovskaya, "On some algorithms for the solution of the complete eigenvalue problem" USSR Comput. Math. Math. Phys. , 3 (1961) pp. 637–657; Zh. Vychisl. Mat. i Mat. Fiz. , 1 : 4 (1961) pp. 555–570
[5a] J.G.F. Francis, "The QR transformation: A unitary analogue to the LR transformation I" Comput. J. , 4 : 3 (1961) pp. 265–271
[5b] J.G.F. Francis, "The QR transformation: A unitary analogue to the LR transformation II" Comput. J. , 4 : 4 (1962) pp. 332–345
[6] G.W. Stewart, "Simultaneous iteration for computing invariant subspaces of non-Hermitian matrices" Numer. Math. , 25 : 2 (1976) pp. 123–136
[7] A. Ruhe, "Computation of eigenvalues and eigenvectors" , Springer (1977)
[8] C. Lanczos, "An iteration method for the solution of the eigenvalue problem of linear differential and integral operators" J. Res. Nat. Bur. Standards , 45 : 4 (1950) pp. 255–282
[9] C.C. Paige, "Computational variants of the Lanczos method for the eigenproblem" J. Inst. Math. Appl. , 10 (1972) pp. 373–381
[10] T. Ericsson, A. Ruhe, "The spectral transformation Lánczos method for the numerical solution of large sparse generalized symmetric eigenvalue problems" Math. Comp. , 35 (1980) pp. 1251–1268


Comments

The Hessenberg form is not only preferable for its sparse structure, but also because it guarantees convergence of the QR-algorithm. Moreover, by a judicious choice of the shifts the convergence of the diagonal entries to the eigen values starts in principle at the last position, thus making it possible to proceed on a deflated matrix.

The Lanczos algorithm is nowadays one of the most powerful methods for computing eigen values of a large sparse matrix (not only "the interior points"). The interesting aspect is that the computed vectors by no means form an orthogonal basis; rather, they are steps in an (infinite) iteration process. Sometimes convergence to an eigen value temporarily halts and is resumed later on; this is due to a (numerically) multiple eigen value. For more information see [a1].

The standard reference for numerical linear algebra is the text book [a7], which provides an excellent introduction to the methods surveyed in the article above. A more detailed treatment of the matrix eigen problem is given in [a8], [a2] and [a3]. Applications of the various variants of Jacobi's method may be found in [a4] and [a5], and aspects of the QR-algorithm are discussed in [a10]. Additional references on the Lanczos method are [a9] and [a11]. Finally, matrix eigen system Fortran routines are collected in [a6].

See also Complete problem of eigen values; Partial problem of eigen values

References

[a1] J.K. Cullum, R.A. Willoughby, "Lanczos algorithms for large symmetric eigenvalue computations" , Birkhäuser (1985)
[a2] B.N. Parlett, "The symmetric eigenvalue problem" , Prentice-Hall (1980)
[a3] J.H. Wilkinson, "The algebraic eigenvalue problem" , Clarendon Press (1965)
[a4] P.J. Eberlein, "Solution of the complex eigenproblem by a norm-reducing Jacobi-type method" Numer. Math. , 14 (1970) pp. 232–245
[a5] G.E. Forsythe, P. Henrici, "The cyclic Jacobi method for computing the principal values of a complex matrix" Trans. Amer. Math. Soc. , 94 (1960) pp. 1–23
[a6] B.S. Garbow, J.M. Boyle, J.J. Dongarra, C.B. Moler, "Matrix eigensystem routines: EISPACK guide extension" , Springer (1977)
[a7] G.H. Golub, C.F. van Loan, "Matrix computations" , Johns Hopkins Univ. Press (1983)
[a8] A.R. Gourlay, G.A. Watson, "Computational methods for matrix eigenproblems" , Wiley (1977)
[a9] C.C. Paige, "Computational variants of the Lanczos method for the eigenproblem" J. Inst. Math. Appl. , 10 (1972) pp. 373–381
[a10] B.N. Parlett, "Convergence of the QR algorithm" Numer. Math. , 7 (1965) pp. 187–193 (correction: ibid. (1967) pp. 163–164)
[a11] B.N. Parlett, D.S. Scott, "The Lanczos algorithm with selective orthogonalization" Math. Comp. , 33 (1979) pp. 217–238
How to Cite This Entry:
Iteration methods. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Iteration_methods&oldid=15109
This article was adapted from an original article by Kh.D. Ikramov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article