
Gauss method



A method of successive elimination of unknowns in solving a set of linear equations, first introduced by C.F. Gauss [1]. Let there be given a system

$$ \tag{$S^{0}$} f_{j}(x) - a_{j} = a_{j1} x_{1} + \dots + a_{jn} x_{n} - a_{j} = 0, $$

$$ j = 1, \dots, m, $$

where $ a_{ji}, a_{j} $ are elements of an arbitrary field $ P $. Without loss of generality one may assume that $ a_{11} \neq 0 $. The Gauss method operates as follows. One subtracts the first equation of the system, multiplied term by term by $ a_{21}/a_{11} $, from the second equation; then one subtracts the first equation multiplied by $ a_{31}/a_{11} $ from the third equation; and so on, until finally the first equation multiplied by $ a_{m1}/a_{11} $ is subtracted from the $ m $-th equation. Let $ S^{1} $ be the system of $ m - 1 $ equations thus obtained; it no longer contains the unknown $ x_{1} $. If some coefficient of an unknown in $ S^{1} $ is non-zero (possibly after changing the order of the equations and unknowns), $ S^{1} $ is treated in the same way as $ S^{0} $, and so on. If the rank $ r $ of the system $ S^{0} $ (i.e. the rank of the matrix of its coefficients) is smaller than $ m $, then the $ r $-th step yields a system $ S^{r} $ in which all coefficients of the unknowns are zero; if $ r = m $, the system $ S^{r} $ is said to be empty. The system $ S^{0} $ is compatible if and only if the system $ S^{r} $ is either compatible (i.e. has no non-zero free terms) or empty.
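The elimination stage can be sketched in code. The following is a minimal illustration, not part of the original article: Python's fractions.Fraction serves as a stand-in for an arbitrary field $ P $, and the name forward_eliminate is invented for the example.

    from fractions import Fraction

    def forward_eliminate(A, b):
        # Repeat the step described above: subtract multiples of the pivot
        # row from the rows below it, swapping rows when a pivot is zero.
        m, n = len(A), len(A[0])
        row = 0
        for col in range(n):
            pivot = next((r for r in range(row, m) if A[r][col] != 0), None)
            if pivot is None:
                continue  # no non-zero coefficient in this column
            A[row], A[pivot] = A[pivot], A[row]
            b[row], b[pivot] = b[pivot], b[row]
            for r in range(row + 1, m):
                t = A[r][col] / A[row][col]  # the multiplier a_{r1}/a_{11}
                A[r] = [x - t * y for x, y in zip(A[r], A[row])]
                b[r] -= t * b[row]
            row += 1
            if row == m:
                break
        return A, b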

The process of obtaining one solution of a (compatible) system $ S^{0} $ may be described as follows. One takes some solution $ (x_{r}^{0}, \dots, x_{n}^{0}) $ of the system $ S^{r-1} $. By assigning the values $ x_{r}^{0}, \dots, x_{n}^{0} $ to the unknowns $ x_{r}, \dots, x_{n} $ in some equation of the system $ S^{r-2} $ with a non-zero coefficient of $ x_{r-1} $ (e.g. in its first equation), one finds $ x_{r-1} = x_{r-1}^{0} $ and so obtains a solution $ (x_{r-1}^{0}, x_{r}^{0}, \dots, x_{n}^{0}) $ of $ S^{r-2} $. In other words, the value $ x_{r-1}^{0} $ is obtained from $ S^{r-2} $ by replacing the unknowns $ x_{r}, \dots, x_{n} $ by the values taken for them. The values $ x_{r-1}^{0}, \dots, x_{n}^{0} $ are then substituted in $ S^{r-3} $, the value $ x_{r-2} = x_{r-2}^{0} $ is found, and hence the solution $ (x_{r-2}^{0}, \dots, x_{n}^{0}) $, etc. The values $ x_{1}^{0}, \dots, x_{r-1}^{0} $ thus found, together with the values $ x_{r}^{0}, \dots, x_{n}^{0} $ taken at the start, constitute a solution $ (x_{1}^{0}, \dots, x_{n}^{0}) $ of $ S^{0} $ [2].
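Back substitution, continuing the sketch above (again an illustration, not the article's own code): the values already found are substituted into the preceding equations to determine the remaining unknowns one by one; free unknowns are simply set to zero here.

    def back_substitute(A, b):
        m, n = len(A), len(A[0])
        x = [Fraction(0)] * n  # free unknowns get the value 0
        for r in range(m - 1, -1, -1):
            lead = next((c for c in range(n) if A[r][c] != 0), None)
            if lead is None:
                continue  # a 0 = 0 equation (the system is compatible)
            s = sum(A[r][c] * x[c] for c in range(lead + 1, n))
            x[lead] = (b[r] - s) / A[r][lead]
        return x

    # For the system 2x + y = 5, 4x + 3y = 11:
    A = [[Fraction(2), Fraction(1)], [Fraction(4), Fraction(3)]]
    b = [Fraction(5), Fraction(11)]
    print(back_substitute(*forward_eliminate(A, b)))  # [Fraction(2, 1), Fraction(1, 1)]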

This method may be generalized as follows [4]. Let $ U $ be some subspace of the vector space $ P^{n} $ and let $ P^{m}(U) $ be the set of all solutions $ (p_{1}, \dots, p_{m}) $ of the equation

$$ \tag{*} p_{1} f_{1}(x) + \dots + p_{m} f_{m}(x) = 0, $$

where $ x $ runs through $ U $. For an arbitrary finite system

$$ p^{i} = (p_{1}^{i}, \dots, p_{m}^{i}), \quad i = 1, \dots, l, $$

of non-zero generating elements of $ P ^ {m} ( U) $ it is possible to construct a system

$$ \sum_{j = 1}^{m} p_{j}^{i} f_{j}(x) - \sum_{j = 1}^{m} p_{j}^{i} a_{j} = 0, \quad i = 1, \dots, l $$

(where $ x $ is the unknown), known as the $ U $-convolution of $ S^{0} $. If $ P^{m}(U) $ contains no non-zero elements, one says that the system $ S^{0} $ has an empty $ U $-convolution. If the system $ S^{0} $ is compatible, then its $ U $-convolution is, for every $ U $, either compatible or empty. It turns out that, conversely, for the system $ S^{0} $ to be compatible it is sufficient that the $ U $-convolution be empty or compatible for a single suitable $ U $. Now let $ U_{1}, \dots, U_{n} $ be the subspaces of $ P^{n} $ generated by the vectors $ e_{1} = (1, 0, \dots, 0), \dots, e_{n} = (0, 0, \dots, 1) $, respectively. For $ U = U_{i} $, equation (*) reduces to the equation

$$ a _ {1i} p _ {1} + \dots + a _ {mi} p _ {m} = 0. $$

Let, for example, $ i = 1 $. If also $ a_{11} \neq 0 $, then the vectors $ (a_{21}/a_{11}, -1, 0, \dots, 0), \dots, (a_{m1}/a_{11}, 0, 0, \dots, -1) $ may be taken as the non-zero generating elements of the space $ P^{m}(U_{1}) $, and forming the $ U_{1} $-convolution of the system $ S^{0} $ then coincides with the elimination of the unknown $ x_{1} $ in Gauss' method.
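This instance can be checked directly in code. The sketch below (illustrative only, following the conventions of the earlier sketches; the name u1_convolution is invented) forms the combinations $ \sum_{j} p_{j}^{i} f_{j} $ for the generators just given and verifies that the coefficient of $ x_{1} $ vanishes in each, so the result agrees, up to sign, with one Gauss elimination step.

    def u1_convolution(A, a):
        # Generators p^i = (a_{i1}/a_{11}, 0, ..., -1, ..., 0), i = 2, ..., m,
        # yield the combinations (a_{i1}/a_{11}) f_1 - f_i.
        m = len(A)
        conv = []
        for i in range(1, m):
            t = A[i][0] / A[0][0]
            coeffs = [t * x - y for x, y in zip(A[0], A[i])]
            assert coeffs[0] == 0  # x_1 has been eliminated
            conv.append((coeffs, t * a[0] - a[i]))
        return conv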

The $ U $- convolution of the system $ S ^ {0} $ for $ U = U _ {i} + U _ {k} $ is a procedure for the simultaneous elimination of two unknowns $ x _ {i} $ and $ x _ {k} $. For example, let $ i = 1 $ and $ k = 2 $. If also

$$ \Delta _ {12} = \left | \begin{array}{ll} a _ {11} &a _ {21} \\ a _ {12} &a _ {22} \\ \end{array} \right | \neq 0, $$

then the rows of the matrix

$$ \left \| \begin{array}{cccccc} - \Delta _ {23} &\Delta _ {13} &- \Delta _ {12} & 0 &\dots & 0 \\ - \Delta _ {24} &\Delta _ {14} & 0 &- \Delta _ {12} &\dots & 0 \\ \dots &\dots &\dots &\dots &{} &\dots \\ - \Delta _ {2m} &\Delta _ {1m} & 0 & 0 &\dots &- \Delta _ {12} \\ \end{array} \right \| , $$

where

$$ \Delta _ {rs} = \ \left | \begin{array}{ll} a _ {r1} &a _ {s1} \\ a _ {r2} &a _ {s2} \\ \end{array} \right | , $$

may be used to obtain the $ (U_{1} + U_{2}) $-convolution of the system $ S^{0} $. By suitably combining the elimination of individual unknowns with the elimination of certain pairs (or, more generally, groups) of unknowns, the system $ S^{0} $ may be solved by algorithms which generalize the Gauss method.
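As an illustration (not from the article), the following sketch builds these rows from the $ 2 \times 2 $ determinants $ \Delta_{rs} $ and verifies that each resulting equation is free of $ x_{1} $ and $ x_{2} $. Note that, unlike the one-unknown step, no division occurs, so integer coefficients stay integer.

    def delta(A, r, s):
        # Delta_{rs} = a_{r1} a_{s2} - a_{s1} a_{r2} (0-based row indices r, s)
        return A[r][0] * A[s][1] - A[s][0] * A[r][1]

    def u12_convolution(A, a):
        m, n = len(A), len(A[0])
        conv = []
        for i in range(2, m):  # equation i + 1 in the article's numbering
            p = [0] * m
            p[0], p[1], p[i] = -delta(A, 1, i), delta(A, 0, i), -delta(A, 0, 1)
            coeffs = [sum(p[j] * A[j][c] for j in range(m)) for c in range(n)]
            assert coeffs[0] == coeffs[1] == 0  # x_1 and x_2 eliminated
            conv.append((coeffs, sum(p[j] * a[j] for j in range(m))))
        return conv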

References

[1] C.F. Gauss, "Beiträge zur Theorie der algebraischen Gleichungen" , Werke , 3 , K. Gesellschaft Wissenschaft. Göttingen (1876) pp. 71–102
[2] A.G. Kurosh, "Higher algebra" , MIR (1972) (Translated from Russian)
[3] D.K. Faddeev, V.N. Faddeeva, "Computational methods of linear algebra" , Freeman (1963) (Translated from Russian)
[4] S.N. Chernikov, "Lineare Ungleichungen" , Deutsch. Verlag Wissenschaft. (1971) (Translated from Russian)

Comments

There are a number of variants of this method, mostly based on practical implementation considerations (like the methods of Crout and Doolittle) or on efficiency (like the method of Cholesky for symmetric positive-definite systems).

In the Western literature, the notions of LU-decomposition, forward elimination and back substitution are often associated with Gauss' method (which is also called the Gaussian elimination method). Consider the particular case where the matrix of coefficients $ A $ in the system $ Ax = a $ is a square matrix ($ m = n $). Then LU-decomposition refers to the factorization $ A = LU $ of $ A $ into a lower-triangular matrix $ L $ and an upper-triangular matrix $ U $. Forward elimination and back substitution are understood to be the solution of the triangular systems $ Ly = a $ and $ Ux = y $, respectively. Using the concept of a flop (as introduced by C.B. Moler), that is, the amount of work involved in performing a floating-point addition, a floating-point multiplication and a little subscripting, it can be shown that LU-decomposition, forward elimination and back substitution require about $ m^{3}/3 $, $ m^{2}/2 $ and $ m^{2}/2 $ flops, respectively.
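The three stages can be sketched with standard library routines (NumPy/SciPy; the matrix and right-hand side below are made-up example data):

    import numpy as np
    from scipy.linalg import lu, solve_triangular

    A = np.array([[4.0, 3.0], [6.0, 3.0]])
    a = np.array([10.0, 12.0])

    P, L, U = lu(A)  # LU-decomposition, A = P L U  (~ m^3/3 flops)
    y = solve_triangular(L, P.T @ a, lower=True)  # forward elimination, ~ m^2/2 flops
    x = solve_triangular(U, y)                    # back substitution,   ~ m^2/2 flops
    assert np.allclose(A @ x, a)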

For numerical purposes it is not sufficient that the pivots (the coefficients $ a_{11} $, etc.) be merely non-zero; they should be as large as possible in absolute value. If the element of largest absolute value in the current column is used as a pivot, this is called partial pivoting; if the element of largest absolute value in the entire matrix (or in the remaining submatrix at later stages) is used, this is called complete pivoting. Partial pivoting amounts to interchanging rows, and it makes LU-decomposition possible even when the matrix $ A $ has singular leading principal submatrices. It requires $ O(m^{2}) $ comparisons in all. With such a strategy the method can be shown to be numerically stable (see Stability of a computational algorithm; Stability of a computational process).
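A sketch of the partial-pivoting choice inside the factorization (illustrative code, not a library routine): at step $ k $ the row whose entry in column $ k $ has the largest absolute value is swapped into the pivot position.

    import numpy as np

    def lu_partial_pivoting(A):
        A = A.astype(float).copy()
        m = A.shape[0]
        piv = np.arange(m)  # records the row interchanges
        for k in range(m - 1):
            p = k + np.argmax(np.abs(A[k:, k]))  # O(m) comparisons per step
            if p != k:
                A[[k, p]] = A[[p, k]]
                piv[[k, p]] = piv[[p, k]]
            if A[k, k] != 0.0:
                A[k + 1:, k] /= A[k, k]  # multipliers: entries of L below the diagonal
                A[k + 1:, k + 1:] -= np.outer(A[k + 1:, k], A[k, k + 1:])
        return A, piv  # L (unit lower) and U packed into one array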

An excellent book on numerical linear algebra is [a1]. The problem of numerical stability in Gaussian elimination is discussed in [a6].

Fortran routines can be found in [a4]; for an older Algol version see [a5].

References

[a1] G.H. Golub, C.F. van Loan, "Matrix computations" , North Oxford Acad. (1983)
[a2] J.H. Wilkinson, "The algebraic eigenvalue problem" , Clarendon Press (1965)
[a3] G. Strang, "Linear algebra and its applications" , Acad. Press (1976)
[a4] J.J. Dongarra, J.R. Bunch, C.B. Moler, G.W. Stewart, "LINPACK users' guide" , SIAM (1978)
[a5] J.H. Wilkinson, C. Reinsch, "Handbook for automatic computation" , 2. Linear algebra , Springer (1971)
[a6] P.A. Businger, "Monitoring the numerical stability of Gaussian elimination" Numer. Math. , 16 (1971) pp. 360–361
How to Cite This Entry:
Gauss method. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Gauss_method&oldid=47050
This article was adapted from an original article by S.N. Chernikov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.