Seidel method

From Encyclopedia of Mathematics
Revision as of 08:12, 6 June 2020


An iterative method for solving a system of linear algebraic equations $Ax = b$. The solution $x^{*}$ is found as the limit of a sequence

$$ x^{(k)} = ( x_{1}^{(k)}, \dots, x_{n}^{(k)} ), $$

the terms of which are computed from the formula

$$ \tag{*} x_{i}^{(k)} = - \sum_{j=1}^{i-1} \frac{a_{ij}}{a_{ii}} \, x_{j}^{(k)} - \sum_{j=i+1}^{n} \frac{a_{ij}}{a_{ii}} \, x_{j}^{(k-1)} + \frac{b_{i}}{a_{ii}}, $$

$$ i = 1, \dots, n, $$

where the $a_{ij}$ are the entries of the matrix $A$, the $b_{i}$ are the components of the vector $b$, and the diagonal entries of $A$ are assumed to be $\neq 0$. The use of (*) differs from the simple-iteration method only in that, at step $k$, the computation of the $i$-th component uses the previously computed $k$-th approximations of the first $i-1$ components.
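As an illustration, iteration (*) can be sketched in plain Python. The function and parameter names (`seidel`, `tol`, `max_iter`) are illustrative, not from the article, and the sketch assumes the diagonal entries of the coefficient matrix are nonzero.

```python
def seidel(A, b, x0=None, tol=1e-10, max_iter=1000):
    """Solve Ax = b by the Seidel (Gauss-Seidel) iteration (*)."""
    n = len(b)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(max_iter):
        x_old = list(x)
        for i in range(n):
            # Components 1..i-1 already hold the new k-th approximations;
            # components i+1..n still hold the (k-1)-th ones, as in (*).
            s1 = sum(A[i][j] * x[j] for j in range(i))
            s2 = sum(A[i][j] * x_old[j] for j in range(i + 1, n))
            x[i] = (b[i] - s1 - s2) / A[i][i]
        if max(abs(x[i] - x_old[i]) for i in range(n)) < tol:
            break
    return x
```

For a diagonally dominant system such as $A = \begin{pmatrix} 4 & 1 \\ 1 & 3 \end{pmatrix}$, $b = (1, 2)^{T}$, the iterates converge rapidly to the exact solution $(1/11, 7/11)$.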

The Seidel method can be expressed in matrix notation as follows. If $A = B + C$, where

$$
B = \left\|
\begin{array}{cccc}
a_{11} & 0 & \dots & 0 \\
a_{21} & a_{22} & \dots & 0 \\
\dots & \dots & \dots & \dots \\
a_{n1} & a_{n2} & \dots & a_{nn}
\end{array}
\right\|, \qquad
C = \left\|
\begin{array}{cccc}
0 & a_{12} & \dots & a_{1n} \\
0 & 0 & \dots & a_{2n} \\
\dots & \dots & \dots & \dots \\
0 & 0 & \dots & 0
\end{array}
\right\|,
$$

then formula (*) in matrix notation is $x^{(k)} = - B^{-1} C x^{(k-1)} + B^{-1} b$. The Seidel method is equivalent to simple iteration applied to the system $x = - B^{-1} C x + B^{-1} b$, which is equivalent to the initial system. The method is convergent if and only if all the eigenvalues of the matrix $B^{-1} C$ are less than 1 in absolute value, or, equivalently, if all roots of the equation $\det ( C + B \lambda ) = 0$ are less than 1 in absolute value.
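This criterion can be checked numerically. The sketch below assumes NumPy is available and uses the splitting $A = B + C$ with $B$ the lower triangle of $A$ (diagonal included) and $C$ the strict upper triangle; the eigenvalues of $B^{-1}C$ have the same moduli as those of the iteration matrix $-B^{-1}C$.

```python
import numpy as np

def seidel_converges(A):
    """Return True iff all eigenvalues of B^{-1} C have modulus < 1."""
    A = np.asarray(A, dtype=float)
    B = np.tril(A)          # lower-triangular part, diagonal included
    C = np.triu(A, k=1)     # strictly upper-triangular part
    eigs = np.linalg.eigvals(np.linalg.solve(B, C))
    return bool(np.max(np.abs(eigs)) < 1.0)
```

For example, the diagonally dominant matrix $\begin{pmatrix} 4 & 1 \\ 1 & 3 \end{pmatrix}$ passes the test, while $\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$ does not (its iteration matrix has an eigenvalue of modulus $3/2$).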

In practical work, the following sufficient conditions for convergence are more convenient. 1) Suppose that $\sum_{j \neq i} | a_{ij} | \leq q \, | a_{ii} |$, $q < 1$, for all $i$, $1 \leq i \leq n$; then the Seidel method is convergent and the following estimate holds for the rate of convergence:

$$ \| x^{(k)} - x^{*} \|_{1} \leq q \, \| x^{(k-1)} - x^{*} \|_{1}, \qquad \| x \|_{1} = \max_{1 \leq i \leq n} | x_{i} |. $$
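Sufficient condition 1) is easy to test directly. The sketch below, with illustrative names, returns the smallest admissible contraction factor $q$, or `None` when no $q < 1$ exists (or a diagonal entry vanishes).

```python
def dominance_factor(A):
    """Smallest q with sum_{j != i} |a_ij| <= q |a_ii| for every row i."""
    q = 0.0
    for i, row in enumerate(A):
        if row[i] == 0:
            return None  # condition 1) requires nonzero diagonal entries
        off_diag = sum(abs(a) for j, a in enumerate(row) if j != i)
        q = max(q, off_diag / abs(row[i]))
    return q if q < 1.0 else None
```

For $A = \begin{pmatrix} 4 & 1 \\ 1 & 3 \end{pmatrix}$ this gives $q = 1/3$, so the estimate above guarantees at least a threefold error reduction per step in the norm $\| \cdot \|_{1}$.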

2) If $A$ is a positive-definite Hermitian matrix, then the Seidel method is convergent.

The method falls into the category of relaxation methods (cf. Relaxation method), the most widely used of which is the hyperrelaxation method.

Among the available modifications of the Seidel method are methods that employ preliminary transformation of the system into an equivalent system $x = Mx + f$ (see [4]).

The method was proposed by L. Seidel in [1].

References

[1] L. Seidel, Abh. Bayer. Akad. Wiss. Math.-Naturwiss. Kl. , 11 : 3 (1874) pp. 81–108
[2] N.S. Bakhvalov, "Numerical methods: analysis, algebra, ordinary differential equations" , MIR (1977) (Translated from Russian)
[3] I.S. Berezin, N.P. Zhidkov, "Computing methods" , Pergamon (1973) (Translated from Russian)
[4] D.K. Faddeev, V.N. Faddeeva, "Computational methods of linear algebra" , Freeman (1963) (Translated from Russian)

Comments

In the West this method is usually referred to as the Gauss–Seidel method, [a1]–[a4].

References

[a1] C.-E. Fröberg, "Introduction to numerical analysis" , Addison-Wesley (1965) pp. §4.5
[a2] W.H. Press, B.P. Flannery, S.A. Teukolsky, W.T. Vetterling, "Numerical recipes" , Cambridge Univ. Press (1986) pp. §17.5
[a3] D.M. Young, R.T. Gregory, "A survey of numerical mathematics" , II , Dover, reprint (1988) pp. §16.5
[a4] F.B. Hildebrand, "Introduction to numerical analysis" , Dover, reprint (1987) pp. §10.5
How to Cite This Entry:
Seidel method. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Seidel_method&oldid=18222
This article was adapted from an original article by G.D. Kim (originator), which appeared in Encyclopedia of Mathematics, ISBN 1402006098.