Linear algebra, numerical methods in

The branch of numerical mathematics concerned with the mathematical description and investigation of processes for solving numerical problems in linear algebra.

Among the problems in linear algebra there are two that are the most important: the solution of a system of linear algebraic equations and the determination of the eigen values and eigen vectors of a matrix. Other problems frequently encountered are: the inversion of a matrix, the calculation of a determinant and the determination of the roots of an algebraic polynomial. These do not have, as a rule, independent significance in linear algebra and have an auxiliary character in the solution of its main problems.

Any numerical method in linear algebra can be regarded as a sequence of arithmetic operations carried out on elements of the input data. If for any input data a numerical method makes it possible to find a solution of the problem in finitely many arithmetic operations, such a method is called direct. Otherwise the numerical method is called iterative.

Direct methods for solving a system of linear algebraic equations.

Suppose one is given a system

$$ \tag{1} A x = b $$

with matrix $ A $ and right-hand side $ b $. If $ A $ is non-singular, then the solution is given by the formula $ x = A^{-1} b $, where $ A^{-1} $ is the matrix inverse to $ A $. The main idea of direct methods consists in transforming (1) into an equivalent system whose matrix is easily inverted, so that it is easy to find a solution of the original system. Suppose that both sides of (1) are multiplied on the left by non-singular matrices $ L_{1}, \dots, L_{k} $. Then the new system

$$ \tag{2} L_{k} \dots L_{1} A x = L_{k} \dots L_{1} b $$

is equivalent to (1). The matrices $ L_{i} $ can always be chosen in such a way that the matrix on the left-hand side of (2) is sufficiently simple, for example, triangular, diagonal or unitary. In this case it is easy to calculate $ A^{-1} $ and the determinant $ | A | $.

One of the first direct methods was the Gauss method. It uses lower triangular matrices (cf. Triangular matrix) $ L_{i} $ and makes it possible to reduce the original system of equations to a system with an upper triangular matrix. This method is easily implemented on a computer; its scheme with choice of a principal element (pivoting) makes it possible to solve a system with an arbitrary non-singular matrix, and its compact scheme makes it possible to obtain results with increased accuracy by accumulating scalar products. Among the direct methods used in practice, the Gauss method requires the least amount of computational work.
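
A minimal sketch of the Gauss method with choice of a principal element (partial pivoting) is given below in Python with NumPy; the function name gauss_solve and the use of NumPy are illustrative choices, not part of the original article, and in practice one would call a library routine such as numpy.linalg.solve.

 import numpy as np

 def gauss_solve(A, b):
     # Gaussian elimination with choice of a principal element in each column.
     A = np.array(A, dtype=float)
     b = np.array(b, dtype=float)
     n = len(b)
     # Forward elimination: each step corresponds to multiplying the system
     # on the left by a permutation and a lower triangular matrix L_i.
     for k in range(n - 1):
         p = k + np.argmax(np.abs(A[k:, k]))      # principal element of column k
         A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
         for i in range(k + 1, n):
             m = A[i, k] / A[k, k]
             A[i, k:] -= m * A[k, k:]
             b[i] -= m * b[k]
     # Back substitution on the resulting upper triangular system.
     x = np.zeros(n)
     for i in range(n - 1, -1, -1):
         x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
     return x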

Directly related to the Gauss method is Jordan's method and a modification of it, the method of optimal elimination (see [2]). These methods use lower and upper triangular matrices $ L_{i} $ and make it possible to reduce the original system to one with a diagonal matrix. In their characteristics both methods differ little from the Gauss method, but the second makes it possible to solve a system of double the order with the same computer memory.

The stated methods belong to the group of so-called elimination methods. This name is explained by the fact that for every multiplication by a matrix $ L_{i} $ one or more elements are eliminated in the matrix of the system. Elimination can be carried out not only by means of triangular matrices but also by means of unitary matrices. Various modifications of elimination methods are essentially connected with the decomposition of the matrix of the system (1) into the product of two triangular matrices or the product of a triangular and a unitary matrix.

Some methods, such as the bordering method and the completion method, are not elimination methods, although they are close to the Gauss method.

Methods based on the construction of an auxiliary system of vectors, connected in some way with the matrix of the original system and orthogonal in some metric, have become widespread. One of the first methods of this group was the method of orthogonal rows. In it the matrices $ L_{i} $ are lower triangular and the matrix of the system (2) is unitary. Methods based on orthogonalization have many merits, but are sensitive to the influence of rounding errors. The development of orthogonalization-type methods led to the method of conjugate directions, which is very effective for the solution of certain systems with sparse matrices (see [9], [10]).
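
As a representative of the conjugate-direction family, here is a minimal conjugate-gradient sketch in Python; it assumes, beyond what the article states, that the matrix is symmetric positive definite, and the names used are purely illustrative.

 import numpy as np

 def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
     # Conjugate-gradient iteration: each search direction is A-conjugate
     # (A-orthogonal) to all previous ones; assumes A symmetric positive definite.
     n = len(b)
     if max_iter is None:
         max_iter = n
     x = np.zeros(n)
     r = b - A @ x                  # residual
     p = r.copy()                   # first search direction
     rs = r @ r
     for _ in range(max_iter):
         Ap = A @ p
         alpha = rs / (p @ Ap)      # step length along the current direction
         x += alpha * p
         r -= alpha * Ap
         rs_new = r @ r
         if np.sqrt(rs_new) < tol:
             break
         p = r + (rs_new / rs) * p  # new direction, conjugate to the old ones
         rs = rs_new
     return x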

Direct methods for determining the eigen values and eigen vectors of a matrix.

Suppose that for a matrix $ A $ it is required to determine an eigen value $ \lambda $ and eigen vector $ x $, that is, to solve the equation

$$ \tag{3} A x = \lambda x . $$

Under the change of variable $ x = C y $, where $ C $ is a non-singular matrix, equation (3) goes over into the equation $ B y = \lambda y $ with $ B = C^{-1} A C $. Direct methods for solving this problem consist of reducing the original matrix $ A $ by finitely many similarity transformations to a matrix $ B $ of such a simple form that it is easy to find the coefficients of the characteristic polynomial and the eigen vectors for it. The eigen values are then found from the characteristic equation by one of the well-known methods. Numerical methods for the transition from $ A $ to $ B $ differ little from the numerical methods for transforming (1) into (2). Many methods of similarity type are known (the methods of Krylov, Danilevskii, Hessenberg, etc., see [1]). However, they have not been widely used in practice in view of their considerable numerical instability (see [3]).
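
For illustration, the following Python sketch reduces a real matrix to upper Hessenberg form by finitely many orthogonal similarity transformations (Householder reflections); this is only one possible reduction of the kind described above, and the function name and use of NumPy are assumptions rather than the article's prescription (scipy.linalg.hessenberg offers a production version).

 import numpy as np

 def to_hessenberg(A):
     # Annihilate the entries below the first subdiagonal, column by column,
     # using Householder reflections applied as similarity transformations.
     H = np.array(A, dtype=float)
     n = H.shape[0]
     for k in range(n - 2):
         x = H[k + 1:, k]
         norm_x = np.linalg.norm(x)
         if norm_x == 0.0:
             continue
         v = x.copy()
         v[0] += np.copysign(norm_x, x[0] if x[0] != 0 else 1.0)
         v /= np.linalg.norm(v)
         # Reflection P = I - 2 v v^T acts on rows and columns k+1, ..., n-1,
         # so H is replaced by the similar matrix P H P.
         H[k + 1:, k:] -= 2.0 * np.outer(v, v @ H[k + 1:, k:])
         H[:, k + 1:] -= 2.0 * np.outer(H[:, k + 1:] @ v, v)
     return H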

One distinguishes between the complete eigen value problem, when one looks for all eigen values, and the partial problem, when one looks for some of them; the latter problem is more typical in the case of linear algebra problems which arise in the approximation of differential and integral equations via difference methods.

Iterative methods for solving a system of linear algebraic equations.

These methods give the solution of the system (1) in the form of the limit of a sequence of certain vectors; these vectors are constructed by a uniform process, called an iterative process. The main iterative process for the solution of (1) can be described by means of the following general scheme. Construct a sequence of vectors $ x^{(1)}, \dots, x^{(k)}, \dots $ from the recurrence formulas

$$ x^{(k)} = x^{(k-1)} + H^{(k)} ( b - A x^{(k-1)} ) , $$

where $ H^{(1)}, H^{(2)}, \dots $ is a sequence of matrices and $ x^{(0)} $ is the initial approximation, generally speaking arbitrary. Different choices of the sequence of matrices $ H^{(k)} $ lead to different iterative processes.
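
A minimal Python sketch of this general scheme with a fixed matrix $ H $ (a stationary process) is shown below; the function name and the suggested choices of $ H $ are illustrative assumptions, for example taking $ H $ equal to the inverse of the diagonal of $ A $ gives the Jacobi method.

 import numpy as np

 def stationary_iteration(A, b, H, x0=None, tol=1e-10, max_iter=1000):
     # General scheme x^(k) = x^(k-1) + H (b - A x^(k-1)) with H fixed.
     # Convergence requires the spectral radius of I - H A to be below one.
     x = np.zeros(len(b)) if x0 is None else np.array(x0, dtype=float)
     for _ in range(max_iter):
         r = b - A @ x                       # residual of the current iterate
         if np.linalg.norm(r) < tol:
             break
         x = x + H @ r
     return x

 # Illustrative choice of H (Jacobi method, assumes a nonzero diagonal):
 # H = np.diag(1.0 / np.diag(A))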

The simplest iterative processes are stationary processes, in which the matrices $ H^{(k)} $ do not depend on the step number $ k $; they are also called methods of simple iteration (see [5]). If the sequence $ H^{(k)} $ is periodic, the process is called cyclic. Every cyclic process can be transformed into a stationary process, but usually such a transformation complicates the method. Non-stationary processes, in particular cyclic processes, are used to speed up the convergence of iterative processes (see [5], [6]). Among the methods for speeding up convergence, those that use the Chebyshev polynomials (see [5], [6]) and conjugate directions (see [10]) take a special place.

The choice of a matrix $ H $ for a stationary process and of matrices $ H^{(k)} $ for a non-stationary process can be made in various ways. It is possible to construct the matrices $ H^{(k)} $ in such a way that the iterative process converges to the solution as fast as possible for a wide class of systems of equations (see [5]). The opposite approach is also possible, when in the construction of the matrices $ H^{(k)} $ maximal use is made of the peculiarities of the given system in order to obtain the iterative process with the greatest rate of convergence (see [6]). The second method of constructing the matrices $ H^{(k)} $ is the more prevalent one.

One of the most prevalent principles for constructing iterative processes is the relaxation principle (see Relaxation method). In this case the matrices $ H^{(k)} $ are chosen from a previously described class of matrices so that at each step of the process some quantity that characterizes the accuracy of the solution of the system is decreased. Among relaxation methods the most developed are coordinate and gradient methods. In coordinate methods the matrices $ H^{(k)} $ are chosen so that at each step one or more components of the successive approximation change. As a measure of the accuracy of the approximate solution $ x $ one most frequently uses the residual vector $ r = A x - b $.
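
The sketch below shows one representative coordinate relaxation scheme in Python (a Gauss-Seidel sweep): at each elementary step a single component of the approximation is changed so that the corresponding component of the residual $ r = A x - b $ is annihilated. The function name is illustrative, and convergence is assumed to hold, for example for a symmetric positive-definite or diagonally dominant matrix.

 import numpy as np

 def coordinate_relaxation(A, b, x0=None, sweeps=100, tol=1e-10):
     # One sweep updates the components x[0], ..., x[n-1] in turn; updating
     # x[i] zeroes the i-th component of the residual r = A x - b.
     n = len(b)
     x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
     for _ in range(sweeps):
         for i in range(n):
             x[i] += (b[i] - A[i, :] @ x) / A[i, i]
         if np.linalg.norm(A @ x - b) < tol:
             break
     return x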

In the case of stationary iterative methods the main error term can also be estimated by means of the $ \delta^{2} $-process (see [5]).

Among other iterative methods one can mention the simple-iteration method, the variable-directions method, the methods of complete and incomplete relaxation, etc. (see [1], [6], [10]). As a rule, iterative methods are convenient for implementation on a computer, but in contrast to direct methods they most frequently have a very restricted range of application. Within their range of application, iterative methods seldom exceed direct methods substantially in the amount of computation. Therefore, the question of comparing direct and iterative methods can be settled only by a detailed study of the properties of an actual system. The most prevalent are iterative methods for solving systems that arise in the difference approximation of differential equations (see [5], [6]).

Iterative methods for solving the complete eigen value problem.

In iterative methods the eigen values are calculated as the limits of certain numerical sequences, without previously determining the coefficients of the characteristic polynomial. As a rule, one simultaneously finds the eigen vectors or certain other vectors connected with them by simple relations. The majority of iterative methods are less sensitive to rounding errors than direct methods, but significantly more laborious. The development of these methods and their introduction into practical calculations became possible only with the advent of computers.

Among iterative methods a special place is taken by the method of rotations (the Jacobi method) for solving the complete eigen value problem for a real symmetric matrix. It is based on the construction of a sequence of matrices orthogonally similar to the original matrix $ A $ and such that the sum of the squares of all elements not on the main diagonal decreases monotonically to zero.

The Jacobi method is very simple: it is easily implemented on a computer and it always converges. Independently of the location of the eigen values it has asymptotically quadratic convergence. The presence of multiple and close eigen values not only does not slow down the convergence of the method, but on the contrary speeds it up. The Jacobi method is stable with respect to the influence of rounding errors in the results of intermediate calculations. The properties of this method are the reason for its prevalence in the solution of the complete eigen value problem for matrices of general form. The Jacobi method can be used, with no essential change, for Hermitian and skew-Hermitian matrices. An insignificant modification of it makes it possible to solve the complete eigen value problem for the matrices $ A^{*} A $ and $ A A^{*} $ simultaneously, without calculating the products of the matrices themselves. An effective extension of this method to arbitrary matrices of simple structure is realized by the generalized Jacobi method.
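
A compact Python sketch of the method of rotations for a real symmetric matrix is given below; choosing the largest off-diagonal element at each step and the particular rotation-angle formula are standard textbook details assumed here, not prescriptions of the article.

 import numpy as np

 def jacobi_rotations(A, tol=1e-12, max_rotations=10000):
     # Each plane rotation is an orthogonal similarity transformation that
     # annihilates one off-diagonal element; the sum of squares of the
     # off-diagonal elements decreases monotonically to zero.
     A = np.array(A, dtype=float)
     n = A.shape[0]
     V = np.eye(n)                                   # accumulates eigen vectors
     for _ in range(max_rotations):
         off = np.abs(A - np.diag(np.diag(A)))
         p, q = np.unravel_index(np.argmax(off), off.shape)
         if off[p, q] < tol:
             break
         theta = 0.5 * np.arctan2(2.0 * A[p, q], A[q, q] - A[p, p])
         c, s = np.cos(theta), np.sin(theta)
         J = np.eye(n)
         J[p, p], J[p, q], J[q, p], J[q, q] = c, s, -s, c
         A = J.T @ A @ J                             # orthogonally similar matrix
         V = V @ J
     return np.diag(A), V                            # eigen values and eigen vectors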

Among the iterative methods for solving the complete eigen value problem a significant group is formed by power methods. The majority of their computational algorithms are arranged according to the following scheme:

$$ \begin{array}{ccc} A G = G_{1} S_{1} &{} &A = L_{1} R_{1} \\ {\dots \dots } &\textrm{ or } &{\dots \dots } \\ A G_{k-1} = G_{k} S_{k} &{} &R_{k-1} L_{k-1} = L_{k} R_{k} \\ {\dots \dots } &{} &{\dots \dots } \\ \end{array} $$

Here, at each step the matrix on the left of the equality sign splits into the product of two matrices. Suppose that one of them is triangular and the other unitary or triangular of a different type. Then, under certain additional assumptions, the matrices $ G_{k}^{-1} A G_{k} $ and $ R_{k} L_{k} $ converge to a quasi-triangular matrix similar to $ A $. As a rule, the orders of the diagonal cells of the quasi-triangular matrix are not large, so the complete eigen value problem for the limiting matrix is solved quite easily.

A few methods of this type are known. One of the best of them is the $ QR $-algorithm (see [7]).
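
The following is a minimal Python sketch of the basic (unshifted) $ QR $-algorithm, included only for illustration: practical implementations first reduce the matrix to Hessenberg form and use shifts and deflation, as in the routines behind numpy.linalg.eig.

 import numpy as np

 def qr_algorithm(A, iterations=200):
     # At every step the current matrix is factored as A_k = Q_k R_k with
     # Q_k unitary and R_k upper triangular; the next iterate
     # A_{k+1} = R_k Q_k = Q_k^{-1} A_k Q_k is similar to the original matrix.
     Ak = np.array(A, dtype=float)
     for _ in range(iterations):
         Q, R = np.linalg.qr(Ak)
         Ak = R @ Q
     return Ak    # under suitable assumptions, close to a quasi-triangular form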

Investigation and classification of numerical methods.

The vast number of numerical methods in linear algebra poses a real problem, not so much that of creating new methods as that of investigating and classifying the existing ones (one of the most complete classifications of methods from the point of view of their mathematical structure is contained in [1]; the monographs [2], [3], [7] are devoted to the description of methods from the point of view of their computer implementation).

The first papers on the analysis of stability and rounding errors in numerical methods for the solution of problems in linear algebra appeared only in 1947–1948 and were devoted to an investigation of the inversion of matrices and of the solution of systems of equations by methods of Gauss type. The practical value of these results was rather small. An important shift in the study of this question occurred in the mid-1960s (see [3]), when a decisive step was taken in the analysis and estimation of equivalent perturbations. The actual computed solution of a problem was interpreted as the exact solution of the same problem, but corresponding to perturbed initial data (so-called backward analysis). This perturbation, known as the equivalent perturbation, completely characterizes the influence of rounding. Many methods were investigated from the point of view of giving a majorizing estimate of the norm of the equivalent perturbation (see [3], [7]).
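
As an illustration of backward analysis (the particular formula is not from the article, but is standard in backward error analysis), the smallest normwise relative size of an equivalent perturbation for which a computed solution of (1) is exact can be evaluated as follows in Python.

 import numpy as np

 def normwise_backward_error(A, b, x_hat):
     # Size of the smallest perturbation (delta A, delta b), measured relative
     # to A and b, such that (A + delta A) x_hat = b + delta b holds exactly.
     r = b - A @ x_hat
     return np.linalg.norm(r) / (
         np.linalg.norm(A, 2) * np.linalg.norm(x_hat) + np.linalg.norm(b))

 # For a numerically stable method the computed solution of a random system
 # typically has a backward error of the order of the machine precision:
 # A = np.random.rand(100, 100); b = np.random.rand(100)
 # print(normwise_backward_error(A, b, np.linalg.solve(A, b)))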

For a fixed computational algorithm and method of rounding, the whole collection of rounding errors is a single-valued vector function $ \phi_{t} (A) $, which depends on the number of digits $ t $ with which the calculation is carried out and on the original data $ A $. The principal term of the function $ \phi_{t} (A) $, as $ t \rightarrow \infty $, does not have any useful explicit expression. However, the investigation of $ \phi_{t} (A) $ in the class of randomly specified initial data has turned out to be very effective. It was shown that the most probable value of the deviation of the computed solution from the exact solution due to rounding errors is substantially less than the maximum possible value (see [7]).

An analysis of the influence of rounding errors showed that there is not much fundamental difference between the best methods from the point of view of stability, i.e. with respect to rounding errors. This conclusion forces one to take a fresh look at the problem of choosing a computational method for practical computational work. The creation of powerful computers substantially weakened the significance of differences between methods in such characteristics as the size of the required computer memory and the number of arithmetical operations, and led to increased demands on guaranteeing the accuracy of the solution. Under these conditions the most preferable methods turned out to be those that, while not very different from the best methods in speed and convenience of implementation on a computer, make it possible to solve a wide class of problems, both well-conditioned and ill-conditioned, and to give an estimate of the accuracy of the computed solution.

The classification and comparison of the numerical methods mentioned above refer mainly to the case when the whole collection of initial data is completely stored in the operational memory of the computer. However, there is special interest in two extreme cases — the solution of problems of linear algebra on small and on very powerful computers. The solution of problems on such computers has its own specific features.

References

[1] D.K. Faddeev, V.N. Faddeeva, "Computational methods of linear algebra" , Freeman (1963) (Translated from Russian)
[2] V.V. Voevodin, "Numerical methods of algebra" , Moscow (1966) (In Russian)
[3] J.H. Wilkinson, "The algebraic eigenvalue problem" , Oxford Univ. Press (1969)
[4] V.V. Voevodin, Vestn. Moskov. Univ. Mat. Mekh. , 2 (1970) pp. 69–82
[5] N.S. Bakhvalov, "Numerical methods: analysis, algebra, ordinary differential equations" , MIR (1977) (Translated from Russian)
[6] A.A. Samarskii, E.S. Nikolaev, "Numerical methods for grid equations" , 1–2 , Birkhäuser (1989) (Translated from Russian)
[7] V.V. Voevodin, "Computational foundations of linear algebra" , Moscow (1977) (In Russian)
[8] V.N. Faddeeva, et al., "Computational methods of linear algebra. Bibliographic index, 1828–1974" , Novosibirsk (1976) (In Russian)
[9] V.V. Voevodin, "Algèbre linéaire" , MIR (1976) (Translated from Russian)
[10] G.I. Marchuk, Yu.A. Kuznetsov, "Iterative methods and quadratic functionals" , Novosibirsk (1972) (In Russian)

How to Cite This Entry:
Linear algebra, numerical methods in. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Linear_algebra,_numerical_methods_in&oldid=47647
This article was adapted from an original article by V.V. Voevodin (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.