Gauss method

From Encyclopedia of Mathematics

A method of successive elimination of unknowns in solving a set of linear equations, first introduced by C.F. Gauss [1]. Let there be given a system

$$S:\qquad a_{i1}x_1+\dots+a_{in}x_n=b_i,\qquad i=1,\dots,m,$$

where the $a_{ij}$ and $b_i$ are elements of an arbitrary field $P$. Without loss of generality one may assume that $a_{11}\neq 0$. The Gauss method operates as follows. One subtracts the first equation of the set, multiplied by $a_{21}/a_{11}$, term by term, from the second equation; then one subtracts the first equation multiplied by $a_{31}/a_{11}$ from the third equation; $\dots$; then one subtracts the first equation multiplied by $a_{m1}/a_{11}$ from the $m$-th equation. Let $S_1$ be the system of $m-1$ subtracted equations, in the unknowns $x_2,\dots,x_n$, thus obtained. If a coefficient of $x_2$ in $S_1$ is non-zero (possibly after changing the sequence of equations and unknowns), $S_1$ is treated as $S$ was, etc. If the rank $r$ of the system $S$ (i.e. the rank of the matrix of its coefficients) is smaller than $m$, then the $r$-th step will yield a system $S_r$ in which all coefficients of the unknowns are zero; if $r=m$, the system $S_r$ is said to be empty. The system $S$ is compatible if and only if the system $S_r$ is either compatible (i.e. has no non-zero free terms) or is empty.
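The successive subtraction steps can be sketched as follows; this is a minimal Python sketch over floats, and the name `forward_eliminate` is illustrative, not from the source:

```python
def forward_eliminate(A, b):
    """Reduce the system [A | b] by successive elimination of unknowns.

    No pivot search is done: each pivot encountered is assumed non-zero,
    as in the description above (the text reorders equations otherwise).
    """
    A = [row[:] for row in A]          # work on copies
    b = b[:]
    m, n = len(A), len(A[0])
    for k in range(min(m, n)):
        if A[k][k] == 0:
            continue                   # the text would reorder equations/unknowns here
        for i in range(k + 1, m):
            factor = A[i][k] / A[k][k]   # the multiplier a_ik / a_kk
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
            b[i] -= factor * b[k]
    return A, b

U, c = forward_eliminate([[2.0, 1.0], [4.0, 5.0]], [3.0, 9.0])
```

After the single elimination step the second equation no longer contains the first unknown, exactly as in the construction of $S_1$.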

The process of obtaining one solution of a (compatible) system $S$ may be described as follows. One takes some solution $x_2^0,\dots,x_n^0$ of the system $S_1$. By assigning the values $x_2^0,\dots,x_n^0$ to the unknowns $x_2,\dots,x_n$ in some equation of the system $S$ with a non-zero coefficient of $x_1$ (e.g. in its first equation), one finds $x_1^0$ and obtains a solution $x_1^0,x_2^0,\dots,x_n^0$ of $S$. In other words, the value $x_1^0$ is obtained from $S$ by replacing the unknowns $x_2,\dots,x_n$ by the values taken for them. In the same way a solution $x_3^0,\dots,x_n^0$ of $S_2$ is substituted in $S_1$, the value $x_2^0$ is found, and hence the solution $x_2^0,\dots,x_n^0$ of $S_1$, etc. The values thus found, together with the values taken arbitrarily for the remaining (free) unknowns, constitute a solution of $S$ [2].
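For a triangular system this bottom-up process is the familiar back substitution; a short Python sketch (the name `back_substitute` is illustrative):

```python
def back_substitute(U, c):
    """Solve an upper-triangular system U x = c by substituting the values
    already found for x_{k+1}, ..., x_n into the k-th equation."""
    n = len(U)
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        s = sum(U[k][j] * x[j] for j in range(k + 1, n))
        x[k] = (c[k] - s) / U[k][k]
    return x

x = back_substitute([[2.0, 1.0], [0.0, 3.0]], [3.0, 3.0])
```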

This method may be generalized as follows [4]. Let $T$ be some subspace of the vector space $P^m$ and consider the set of all solutions of the equation

$$t_1(a_{11}x_1+\dots+a_{1n}x_n-b_1)+\dots+t_m(a_{m1}x_1+\dots+a_{mn}x_n-b_m)=0, \tag{*}$$

where $t=(t_1,\dots,t_m)$ runs through $T$. For an arbitrary finite system

$$t^{(1)},\dots,t^{(k)}$$

of non-zero generating elements of $T$ it is possible to construct a system

$$t_1^{(s)}(a_{11}x_1+\dots+a_{1n}x_n-b_1)+\dots+t_m^{(s)}(a_{m1}x_1+\dots+a_{mn}x_n-b_m)=0,\qquad s=1,\dots,k$$

(where $x=(x_1,\dots,x_n)$ is the unknown), known as the $T$-convolution of $S$. If $T$ contains no non-zero elements, one says that the system $S$ has an empty $T$-convolution. If the system $S$ is compatible, then the $T$-convolution of $S$ for any $T$ is compatible or is empty. It was found that, for the system $S$ to be compatible, it is sufficient that the $T$-convolution be empty or compatible for only one suitably chosen $T$. Again, for $j=1,\dots,n$, let $T_j$ be the subspace generated in the space $P^m$ by the vectors $t$ satisfying $t_1a_{1j}+\dots+t_ma_{mj}=0$. For $T=T_j$, equation (*) is reduced to an equation not containing the unknown $x_j$.

Let, for example, $j=1$. If also $a_{11}\neq 0$, then the vectors $t^{(i)}=a_{11}e_i-a_{i1}e_1$, $i=2,\dots,m$ (where $e_1,\dots,e_m$ is the standard basis of $P^m$), may be taken as the non-zero generating elements of the space $T_1$, and the $T_1$-convolution of the system $S$ will coincide with the procedure of elimination of the unknown $x_1$ in Gauss' method.
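A convolution of this kind can be computed directly: each generating vector $t$ yields one equation $t\cdot(Ax-b)=0$. The sketch below assumes the generators $t^{(i)}=a_{11}e_i-a_{i1}e_1$ for eliminating the first unknown; the function name `convolve` is illustrative:

```python
def convolve(A, b, gens):
    """T-convolution of the system A x = b: each generator t contributes
    the single equation (t @ A) x = t @ b."""
    m, n = len(A), len(A[0])
    rows = []
    for t in gens:
        row = [sum(t[i] * A[i][j] for i in range(m)) for j in range(n)]
        rhs = sum(t[i] * b[i] for i in range(m))
        rows.append((row, rhs))
    return rows

A = [[2.0, 1.0], [4.0, 5.0]]
b = [3.0, 9.0]
# generators of T_1 (eliminating x_1): t = a_11 * e_i - a_i1 * e_1, i >= 2
gens = [[-A[1][0], A[0][0]]]
conv = convolve(A, b, gens)
```

The resulting single equation contains no $x_1$ term; it is the Gauss-eliminated second equation scaled by $a_{11}$.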

The $T$-convolution of the system $S$ for $T=T_j\cap T_l$ is a procedure for the simultaneous elimination of the two unknowns $x_j$ and $x_l$. For example, let $j=1$ and $l=2$. If also

$$d=\begin{vmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{vmatrix}\neq 0,$$

then the rows of the matrix

$$\begin{pmatrix}a_{21}a_{32}-a_{22}a_{31}&a_{12}a_{31}-a_{11}a_{32}&d&0&\dots&0\\a_{21}a_{42}-a_{22}a_{41}&a_{12}a_{41}-a_{11}a_{42}&0&d&\dots&0\\\vdots&\vdots&\vdots&\vdots&\ddots&\vdots\\a_{21}a_{m2}-a_{22}a_{m1}&a_{12}a_{m1}-a_{11}a_{m2}&0&0&\dots&d\end{pmatrix}$$

may be used to obtain the $T_1\cap T_2$-convolution of the system $S$. By suitably combining eliminations of individual unknowns with eliminations of certain pairs (or, generally, samples) of unknowns, the system $S$ may be solved by using algorithms which are generalizations of the Gauss method.
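One way to realize such a simultaneous elimination is sketched below, under the assumption that the combining coefficients are built from $2\times 2$ minors of the first two columns (so that both leading columns vanish at once); the name `eliminate_pair` is illustrative:

```python
def eliminate_pair(A, b):
    """Eliminate x_1 and x_2 simultaneously: combine row i (i >= 3) with
    rows 1 and 2 using coefficients built from 2x2 minors, so that the
    first two columns of every combined equation vanish."""
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # leading 2x2 minor, assumed non-zero
    out = []
    for i in range(2, len(A)):
        c1 = A[1][0] * A[i][1] - A[1][1] * A[i][0]
        c2 = A[0][1] * A[i][0] - A[0][0] * A[i][1]
        row = [c1 * A[0][j] + c2 * A[1][j] + d * A[i][j] for j in range(len(A[0]))]
        rhs = c1 * b[0] + c2 * b[1] + d * b[i]
        out.append((row, rhs))
    return out

A = [[1.0, 0.0, 1.0],
     [0.0, 1.0, 1.0],
     [1.0, 1.0, 1.0]]
b = [2.0, 3.0, 4.0]
conv = eliminate_pair(A, b)
```

For this $3\times 3$ example the convolution is the single equation $-x_3=-1$, from which $x_3$, and then $x_1$ and $x_2$, are recovered.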


[1] C.F. Gauss, "Beiträge zur Theorie der algebraischen Gleichungen" , Werke , 3 , K. Gesellschaft Wissenschaft. Göttingen (1876) pp. 71–102
[2] A.G. Kurosh, "Higher algebra" , MIR (1972) (Translated from Russian)
[3] D.K. Faddeev, V.N. Faddeeva, "Computational methods of linear algebra" , Freeman (1963) (Translated from Russian)
[4] S.N. Chernikov, "Lineare Ungleichungen" , Deutsch. Verlag Wissenschaft. (1971) (Translated from Russian)


There are a number of variants of this method, mostly based on practical implementation considerations (like the methods of Crout and Doolittle) or efficiency (like the method of Choleski for symmetric systems).

In the Western literature, the notions of LU-decomposition, forward elimination and back substitution are often associated with Gauss' method (which is also called the Gaussian elimination method). Consider the particular case where the matrix of coefficients of the system $Ax=b$ is a square $n\times n$ matrix $A$. Then LU-decomposition refers to the decomposition of $A$ into a lower-triangular matrix $L$ and an upper-triangular matrix $U$, i.e. $A=LU$. Forward elimination and back substitution are understood to be the solution of the triangular systems $Ly=b$ and $Ux=y$, respectively. By using the concept of a flop (as introduced by C.B. Moler), that is, the amount of work involved in performing a floating-point addition, a floating-point multiplication and a little subscripting, it can be shown that LU-decomposition, forward elimination and back substitution require about $n^3/3$, $n^2/2$ and $n^2/2$ flops, respectively.
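The decomposition can be sketched as follows; this is a minimal Doolittle-style Python sketch without pivoting (so all leading principal submatrices are assumed non-singular), and the name `lu_decompose` is illustrative:

```python
def lu_decompose(A):
    """LU-decomposition A = L U without pivoting: L is lower-triangular with
    unit diagonal, U is upper-triangular; about n^3/3 flops."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for k in range(n):
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]        # multiplier stored in L
            for j in range(k, n):
                U[i][j] -= L[i][k] * U[k][j]   # eliminate column k from row i
    return L, U

L, U = lu_decompose([[2.0, 1.0], [4.0, 5.0]])
```

The multipliers of Gauss' method are exactly the subdiagonal entries of $L$, which is why elimination and LU-decomposition are two descriptions of the same computation.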

For numerical purposes it is not sufficient to ensure that the pivots (the coefficients $a_{11}$, $a_{22}^{(1)}$, etc., by which one divides at each step) are merely non-zero; they should be as large as possible. If the absolutely largest element of the pivot column (on and below the diagonal) is used as a pivot, this is called partial pivoting. If the absolutely largest element in the entire matrix (or submatrix at later stages) is used as a pivot, this is called complete pivoting. Partial pivoting, which makes LU-decomposition possible even when the matrix has singular leading principal submatrices, amounts to interchanging rows; it requires about $n^2/2$ comparisons. With such a strategy the method is numerically stable in practice (see Stability of a computational algorithm; Stability of a computational process).
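Partial pivoting can be sketched as follows; storing the multipliers in the eliminated positions is a common convention rather than part of the definition, and the name `lu_partial_pivot` is illustrative:

```python
def lu_partial_pivot(A):
    """Gaussian elimination with partial pivoting: before eliminating
    column k, swap in the row whose entry in column k is largest in
    absolute value, so every multiplier satisfies |m| <= 1."""
    n = len(A)
    U = [row[:] for row in A]
    perm = list(range(n))                 # record of row interchanges
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(U[i][k]))   # ~ n-k comparisons
        if p != k:
            U[k], U[p] = U[p], U[k]
            perm[k], perm[p] = perm[p], perm[k]
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]
            U[i][k] = m                   # store the multiplier in place
            for j in range(k + 1, n):
                U[i][j] -= m * U[k][j]
    return U, perm

U, perm = lu_partial_pivot([[1e-12, 1.0], [1.0, 1.0]])
```

On this nearly-singular-pivot example the rows are interchanged first, so the tiny entry never serves as a divisor.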

An excellent book on numerical linear algebra is [a1]. The problem of numerical stability in Gaussian elimination is discussed in [a6].

Fortran routines can be found in [a4]; for an older Algol version see [a5].


[a1] G.H. Golub, C.F. Van Loan, "Matrix computations" , North Oxford Acad. (1983)
[a2] J.H. Wilkinson, "The algebraic eigenvalue problem" , Clarendon Press (1965)
[a3] G. Strang, "Linear algebra and its applications" , Acad. Press (1976)
[a4] J.J. Dongarra, J.R. Bunch, C.B. Moler, G.W. Stewart, "LINPACK users guide" , SIAM (1978)
[a5] J.H. Wilkinson, C. Reinsch, "Handbook for automatic computation" , 2. Linear algebra , Springer (1971)
[a6] P.A. Businger, "Monitoring the numerical stability of Gaussian elimination" , Numer. Math. , 16 (1971) pp. 360–361
How to Cite This Entry:
Gauss method. S.N. Chernikov (originator), Encyclopedia of Mathematics.
This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098