Rotation method

From Encyclopedia of Mathematics
Latest revision as of 19:17, 17 January 2024


Jacobi method

A method for the solution of the complete problem of eigenvalues for a Hermitian matrix, based on reducing the Hermitian matrix to diagonal form by a similarity transformation built from a sequence of planar rotations. The rotation method is iterative; it has a simple computational scheme and always converges, with an asymptotically quadratic rate of convergence. The presence of multiple or close eigenvalues presents no difficulties. The method can compute the eigenvalues either together with the eigenvectors or without them. The system of eigenvectors computed by the rotation method is orthonormal.

The ideas underlying the method were presented in [1]. In its modern form it is one of the most advanced and effective computer methods for solving the complete problem of eigenvalues of a matrix.

The classical rotation method involves the construction of a sequence of matrices $ A_{0}, A_{1}, \dots $ where $ A_{0} = A $ is the initial matrix, $ A_{k} = U_{k}^{*} A_{k-1} U_{k} $, and $ U_{k} $ is the matrix of the planar rotation which annihilates the off-diagonal entry of maximum modulus of the matrix $ A_{k-1} $. Here, if $ A_{k} = \| a_{ij}^{(k)} \| $, $ | a_{pq}^{(k-1)} | = \max_{i \neq j} | a_{ij}^{(k-1)} | $, then the matrix $ U_{k} = \| u_{ij}^{(k)} \| $ differs from the identity matrix only in the entries $ u_{pp}^{(k)} $, $ u_{qq}^{(k)} $, $ u_{pq}^{(k)} $, $ u_{qp}^{(k)} $. In the real case, when $ A $ is a symmetric matrix,

$$ \tag{*} \left . \begin{array}{c} u_{pp}^{(k)} = u_{qq}^{(k)} = \cos \phi , \qquad u_{pq}^{(k)} = - u_{qp}^{(k)} = \sin \phi , \\ \tan 2 \phi = \frac{2 a_{pq}^{(k-1)}}{a_{pp}^{(k-1)} - a_{qq}^{(k-1)}} , \qquad | \phi | \leq \frac{\pi}{4} . \end{array} \right \} $$

In the complex case the relations (*) become only slightly more complicated.

The sequence of matrices $ A_{k} $ converges to a diagonal matrix $ \Lambda $, the rate of convergence being asymptotically quadratic. The diagonal entries of $ \Lambda $ are approximate eigenvalues of $ A $, while the columns of the matrix $ U^{(k)} = U_{1} \cdots U_{k} $ are approximate eigenvectors.
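As a concrete illustration, the classical variant described above can be sketched in Python with NumPy. The function name and interface are illustrative assumptions, not part of the original article; the rotation angle is taken from formula (*), with the sign placement of $ \sin \phi $ chosen so that the pivot entry is actually annihilated.

```python
import numpy as np

def jacobi_classical(A, tol=1e-12, max_iter=10_000):
    """Classical rotation (Jacobi) method for a real symmetric matrix A.

    Returns approximate eigenvalues (the diagonal of the final A_k) and
    approximate eigenvectors (the columns of U = U_1 U_2 ... U_k).
    """
    A = np.array(A, dtype=float)
    n = A.shape[0]
    U = np.eye(n)
    for _ in range(max_iter):
        # Choose the off-diagonal entry of maximum modulus.
        off = np.abs(A - np.diag(np.diag(A)))
        p, q = np.unravel_index(np.argmax(off), off.shape)
        if off[p, q] < tol:
            break
        # Angle from (*): tan(2 phi) = 2 a_pq / (a_pp - a_qq), |phi| <= pi/4.
        d = A[p, p] - A[q, q]
        phi = np.pi / 4 * np.sign(A[p, q]) if d == 0 else 0.5 * np.arctan(2 * A[p, q] / d)
        c, s = np.cos(phi), np.sin(phi)
        J = np.eye(n)
        J[p, p] = J[q, q] = c
        J[p, q], J[q, p] = -s, s        # planar rotation in the (p, q) plane
        A = J.T @ A @ J                 # similarity step A_k = U_k^* A_{k-1} U_k
        U = U @ J
    return np.diag(A), U
```

Each step zeroes the chosen entry exactly; subsequent rotations may reintroduce small off-diagonal values, which is why the process is iterated to a tolerance.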

The realization of this variant of the rotation method involves choosing the off-diagonal entry of maximum modulus at each step. Performing this search on a computer requires considerable computational labour.

There exist other variants of the rotation method, which are more effective in this respect: the cyclic rotation method, the rotation method with a barrier, and the rotation method with selection of an optimal element.

In the cyclic rotation method the pairs of indices $ ( p, q ) $ of the entry to be annihilated run cyclically through all positions above the main diagonal. The drawback of this method is that it may carry out a large number of ineffective rotations that annihilate small off-diagonal entries.
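One cyclic sweep can be sketched as follows (Python with NumPy; the function name and in-place interface are illustrative assumptions), using the same angle formula (*) as the classical variant:

```python
import numpy as np

def cyclic_sweep(A, U):
    """One sweep of the cyclic rotation method: the pair (p, q) runs over
    every position above the diagonal, annihilating each entry in turn.
    A and U are modified in place."""
    n = A.shape[0]
    for p in range(n - 1):
        for q in range(p + 1, n):
            if A[p, q] == 0.0:
                continue                # the rotation would be the identity
            d = A[p, p] - A[q, q]
            phi = np.pi / 4 * np.sign(A[p, q]) if d == 0 else 0.5 * np.arctan(2 * A[p, q] / d)
            c, s = np.cos(phi), np.sin(phi)
            J = np.eye(n)
            J[p, p] = J[q, q] = c
            J[p, q], J[q, p] = -s, s    # rotation in the (p, q) plane
            A[:] = J.T @ A @ J
            U[:] = U @ J
    return A, U
```

Sweeps are repeated until the off-diagonal norm falls below a tolerance; note that the sweep rotates even very small entries, which is exactly the inefficiency the text describes.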

This disadvantage is partly eliminated in the rotation method with a barrier, which introduces a sequence of numbers $ \alpha_{1}, \alpha_{2}, \dots $ called barriers, decreasing monotonically to zero. During the cyclic sweep over the indices $ ( p, q) $ only those off-diagonal entries whose modulus exceeds $ \alpha_{1} $ are annihilated. In the following stage, after all the off-diagonal entries have become smaller than $ \alpha_{1} $, the barrier $ \alpha_{1} $ is replaced by the barrier $ \alpha_{2} $ and the process is continued. In practical work this variant involves certain difficulties concerning the choice of an optimal barrier sequence.
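A minimal sketch of the barrier variant (Python with NumPy) might look as follows; the particular barrier sequence and the function name are arbitrary illustrative choices, since, as noted above, selecting good barriers is itself a nontrivial question:

```python
import numpy as np

def jacobi_with_barriers(A, barriers=(1e-2, 1e-5, 1e-9, 1e-12)):
    """Rotation method with a barrier: during each cyclic sweep only entries
    whose modulus exceeds the current barrier alpha are annihilated; once all
    off-diagonal entries fall below alpha, the next (smaller) barrier is used.
    """
    A = np.array(A, dtype=float)
    n = A.shape[0]
    U = np.eye(n)
    for alpha in barriers:
        while True:
            rotated = False
            for p in range(n - 1):
                for q in range(p + 1, n):
                    if abs(A[p, q]) <= alpha:
                        continue        # below the barrier: skip this entry
                    d = A[p, p] - A[q, q]
                    phi = np.pi / 4 * np.sign(A[p, q]) if d == 0 else 0.5 * np.arctan(2 * A[p, q] / d)
                    c, s = np.cos(phi), np.sin(phi)
                    J = np.eye(n)
                    J[p, p] = J[q, q] = c
                    J[p, q], J[q, p] = -s, s
                    A = J.T @ A @ J
                    U = U @ J
                    rotated = True
            if not rotated:             # every entry is now below the barrier
                break
    return np.diag(A), U
```

Termination of each stage is guaranteed because every rotation removes an entry of modulus greater than $ \alpha $, decreasing the sum of squares of the off-diagonal entries by more than $ 2 \alpha^{2} $.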

The method which is best suited for use in computational practice is the rotation method with selection of an optimal element [4]. In this method the pair of indices $ ( p, q) $ corresponds to an almost-maximal entry: $ p $ is the number of a row of maximal Euclidean length, and $ q $ is the number of a column of an off-diagonal entry of maximum modulus in the $ p $-th row. Since the rows of the matrix, except for the $ p $-th and $ q $-th rows, do not vary in length at any step of the process, the choice of the indices $ ( p, q) $ does not significantly increase the computational labour involved. The entire theory of the classical rotation method applies fully to this modification [2].
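The pivot selection of this variant can be sketched as follows (Python with NumPy; the function name is hypothetical, and it is assumed here that "Euclidean length" of a row refers to its off-diagonal part):

```python
import numpy as np

def select_optimal_element(A):
    """Pivot choice of the optimal-element variant: p is the index of the
    row whose off-diagonal part has maximal Euclidean length, and q is the
    column of the entry of maximum modulus in that row."""
    off = np.abs(np.array(A, dtype=float))
    np.fill_diagonal(off, 0.0)                     # ignore diagonal entries
    p = int(np.argmax(np.linalg.norm(off, axis=1)))
    q = int(np.argmax(off[p]))
    return p, q
```

In a full implementation the row lengths would be stored and updated rather than recomputed: since only the $ p $-th and $ q $-th rows change at each step, the update costs $ O( n) $ per rotation, which is why this selection adds little overhead.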

The difference formulas used to evaluate (*) ensure the convergence of the rotation process under the practical conditions of computer arithmetic, and also yield highly accurate values of both eigenvalues and eigenvectors [5].

Comments

It is an overstatement to call this Jacobi method one of the most effective methods available today. Indeed, for a Hermitian matrix the so-called QR method (see [a1] and Jacobi method) has cubic convergence, whereas the cyclic version of the Jacobi method, e.g., has only quadratic convergence.

References

[1] C.G.J. Jacobi, "Ueber ein leichtes Verfahren, die in der Theorie der Säcularstörungen vorkommenden Gleichungen numerisch aufzulösen" J. Reine Angew. Math. , 30 (1846) pp. 51–94
[2] V.V. Voevodin, "Numerical methods of algebra" , Moscow (1966) (In Russian)
[3] J.H. Wilkinson, "The algebraic eigenvalue problem" , Oxford Univ. Press (1969)
[4] V.V. Voevodin, G.D. Kim, "A program for finding eigen...." Vychisl. Met. i Programmirovanie , Moscow (1962) pp. 269–277
[5] G.D. Kim, Numerical analysis in Fortran , 3 , Moscow (1973) pp. 97–113 (In Russian)
[a1] G.H. Golub, C.F. van Loan, "Matrix computations" , Johns Hopkins Univ. Press (1989)
How to Cite This Entry:
Rotation method. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Rotation_method&oldid=48589
This article was adapted from an original article by G.D. Kim (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article