Accumulation of errors

in the numerical solution of algebraic equations

The overall effect of rounding-off at the various stages of a computation procedure on the accuracy of the computed solution to a system of algebraic equations. The most commonly employed technique for a priori estimation of the total effect of rounding-off errors in numerical methods of linear algebra is the scheme of inverse (or backward) analysis. Applied to the solution of linear algebraic equations

$$ \tag{1 } Ax = b $$

the scheme of inverse analysis is as follows. On the assumption that some direct method $ M $ has been used, the computed solution $ x _ {M} $ does not satisfy (1), but it can be expressed as the exact solution of a perturbed system

$$ \tag{2 } (A + F _ {M} ) x = b + k _ {M} . $$

The quality of the direct method is estimated in terms of the best a priori estimate that can be found for the norms of the matrix $ F _ {M} $ and the vector $ k _ {M} $. These "best" $ F _ {M} $ and $ k _ {M} $ are known as the equivalent perturbation matrix and vector, respectively, for the method $ M $.

If estimates for $ F _ {M} $ and $ k _ {M} $ are available, the error of the approximate solution $ x _ {M} $ can be estimated theoretically by the inequality

$$ \tag{3 } \frac{\| x - x _ {M} \| }{\| x \| } \leq \ \frac{ \mathop{\rm cond} (A) }{1 - \| A ^ {-1} \| \| F _ {M} \| } \left ( \frac{\| F _ {M} \| }{\| A \| } + \frac{\| k _ {M} \| }{\| b \| } \right ) . $$

Here $ \mathop{\rm cond} (A) = \| A \| \| A ^ {-1} \| $ is the condition number of the matrix $ A $, and the matrix norm in (3) is assumed to be subordinate to the vector norm $ \| \cdot \| $.
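For illustration, the bound (3) is easy to evaluate numerically. The following sketch (Python with NumPy; the test system and the artificial perturbations standing in for $ F _ {M} $ and $ k _ {M} $ are purely illustrative) checks that the right-hand side of (3) indeed dominates the actual relative error:

```python
import numpy as np

# An illustrative 2x2 system; A, b, F and k are chosen for demonstration only.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = np.linalg.solve(A, b)                    # exact solution of (1)

eps = np.finfo(float).eps
F = eps * np.linalg.norm(A, 2) * np.eye(2)   # equivalent perturbation matrix
k = eps * np.linalg.norm(b) * np.ones(2)     # equivalent perturbation vector
x_M = np.linalg.solve(A + F, b + k)          # "computed" solution, satisfying (2)

# Right-hand side of (3), with the spectral norm subordinate to the 2-norm:
cond = np.linalg.cond(A, 2)
denom = 1.0 - np.linalg.norm(np.linalg.inv(A), 2) * np.linalg.norm(F, 2)
bound = cond / denom * (np.linalg.norm(F, 2) / np.linalg.norm(A, 2)
                        + np.linalg.norm(k) / np.linalg.norm(b))

rel_err = np.linalg.norm(x - x_M) / np.linalg.norm(x)
print(rel_err <= bound)                      # True: the bound dominates the error
```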

In reality, an estimate for $ \| A ^ {-1} \| $ is rarely known in advance, and the main value of (2) lies in the possibility it offers of comparing the merits of different methods. In the sequel some typical estimates for the matrix $ F _ {M} $ are presented.

For methods with orthogonal transformations and floating-point arithmetic ( $ A $ and $ b $ in the system (1) are assumed to be real)

$$ \tag{4 } \| F _ {M} \| _ {E} \leq \ f (n) \cdot \| A \| _ {E} \cdot \epsilon . $$

In this estimate, $ \epsilon $ is the relative precision of the machine arithmetic, $ \| A \| _ {E} = ( \sum a _ {ij} ^ {2} ) ^ {1/2} $ is the Euclidean matrix norm, and $ f (n) $ is a function of the form $ Cn ^ {k} $, where $ n $ is the order of the system. The exact values of the constants $ C $ and the exponents $ k $ depend on such details of the computation procedure as the rounding-off method, the use made of accumulation of inner products, etc. Most frequently, $ k = 1 $ or $ 3/2 $.

In Gauss-type methods, the right-hand side of the estimate (4) involves yet another factor $ g (A) $, reflecting the possibility that the elements of $ A $ may increase at intermediate steps of the method in comparison with their initial level (no such increase occurs in orthogonal methods). In order to reduce $ g (A) $ one resorts to various ways of pivoting, thus putting bounds on the increase of the matrix elements.
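The growth factor can be observed directly. Below is a small NumPy sketch (the helper `growth_factor` and the test matrix are illustrative, not part of the article) that performs Gaussian elimination with partial pivoting and records the largest element arising at intermediate steps; Wilkinson's classical example attains the worst-case growth $ 2 ^ {n - 1 } $:

```python
import numpy as np

def growth_factor(A):
    """Largest intermediate element of Gaussian elimination with partial
    pivoting, relative to the largest element of A (one form of g(A))."""
    U = A.astype(float).copy()
    n = U.shape[0]
    g = np.abs(U).max()
    for j in range(n - 1):
        p = j + np.argmax(np.abs(U[j:, j]))      # pivot: largest entry in column j
        U[[j, p]] = U[[p, j]]
        for i in range(j + 1, n):
            U[i, j:] -= (U[i, j] / U[j, j]) * U[j, j:]
        g = max(g, np.abs(U).max())
    return g / np.abs(A).max()

# Wilkinson's example attaining the worst case 2^(n-1) under partial pivoting:
n = 6
W = np.tril(-np.ones((n, n)), -1) + np.eye(n)    # -1 below the diagonal, 1 on it
W[:, -1] = 1.0                                   # last column of ones
print(growth_factor(W))                          # 32.0 = 2^(n-1)
```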

In the square-root method (or Cholesky method), which is commonly used when the matrix $ A $ is positive definite, one has the sharper estimate

$$ \| F _ {M} \| _ {E} \leq \ C \| A \| _ {E} \cdot \epsilon . $$
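A minimal NumPy check of this estimate (random positive-definite test matrix; NumPy's Frobenius norm stands in for $ \| \cdot \| _ {E} $; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A = B @ B.T + 50.0 * np.eye(50)      # a well-conditioned positive-definite matrix

L = np.linalg.cholesky(A)            # square-root (Cholesky) factorization
F = A - L @ L.T                      # observed equivalent perturbation matrix

eps = np.finfo(float).eps
# The ratio below plays the role of the constant C and stays modest:
print(np.linalg.norm(F, 'fro') / (np.linalg.norm(A, 'fro') * eps))
```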

There exist direct methods (the Jordan method, the bordering method and the method of conjugate gradients) for which a direct application of the scheme of inverse analysis does not yield effective estimates. In these cases different arguments are utilized to investigate the accumulation of errors (see [6]–[9]).

References

[1] W. Givens, "Numerical computation of the characteristic values of a real symmetric matrix" U.S. Atomic Energy Commission Reports. Ser. ORNL : 1574 (1954)
[2] J.H. Wilkinson, "Rounding errors in algebraic processes" , Prentice-Hall (1962)
[3] J.H. Wilkinson, "The algebraic eigenvalue problem" , Oxford Univ. Press (1969)
[4] V.V. Voevodin, "Rounding-off errors and stability in direct methods of linear algebra" , Moscow (1969) (In Russian)
[5] V.V. Voevodin, "Computational foundations of linear algebra" , Moscow (1977) (In Russian)
[6] G. Peters, J.H. Wilkinson, "On the stability of Gauss–Jordan elimination with pivoting" Commun. ACM , 18 : 1 (1975) pp. 20–24
[7] C.G. Broyden, "Error propagation in numerical processes" J. Inst. Math. and Appl. , 14 : 2 (1974) pp. 131–140
[8] J.K. Reid, Large sparse sets of linear equations , London–New York (1971) pp. 231–254
[9] Kh.D. Ikramov, "The condition for intermediate matrices for Gauss, Jordan and optimal elimination methods" Zh. Vychisl. Mat. i Mat. Fiz. , 18 : 3 (1978) pp. 531–545 (In Russian)

Comments

Reference [3] above contains a detailed account of rounding-off error analysis (also called roundoff error analysis) of algebraic processes. In addition one can mention [a1], pp. 87–97, and [a5], pp. 69–82. A historical analysis of roundoff analysis is given in [a6]. More recent developments (interval analysis and automatic error analysis) can be found in [a2], [a3] and [a7]. A complete introduction to interval analysis is given in [a4].

References

[a1] G.E. Forsythe, C.B. Moler, "Computer solution of linear algebraic systems" , Prentice-Hall (1967)
[a2] J. Larsen, A. Sameh, "Efficient calculation of the effects of roundoff errors" ACM Trans. Math. Software , 4 (1978) pp. 228–236
[a3] W. Miller, D. Spooner, "Software for roundoff analysis II" ACM Trans. Math. Software , 4 (1978) pp. 369–390
[a4] G.W. Stewart, "Conjugate gradients methods for solving systems of linear equations" Numer. Math. , 21 (1973) pp. 284–297
[a5] J.H. Wilkinson, "Error analysis of direct methods of matrix inversion" J. Assoc. Comput. Machinery , 8 (1961) pp. 281–330
[a6] J.H. Wilkinson, "Modern error analysis" SIAM Review , 13 (1971) pp. 548–588
[a7] J.M. Yohe, "Software for interval arithmetic" ACM Trans. Math. Software , 5 (1979) pp. 50–63

The accumulation of rounding-off errors (or, more generally, of computational errors) occurs when one is solving problems in which the solution is the result of a large number of consecutively performed arithmetic operations.

A significant proportion of such problems arises in the solution of linear or non-linear algebraic problems (see above). In turn, one is most commonly concerned with algebraic problems that arise from the approximation of differential equations. These problems display certain specific features.

Such errors accumulate in accordance with the same laws as those governing the accumulation of computational errors, or even simpler ones; they may be investigated when analyzing a method for the solution of a problem.

There are two different approaches to investigating the accumulation of computational errors. In the first case one assumes that the computational errors at each step are introduced in the most unfavourable way and so obtains a majorizing estimate for the error. In the second case, one assumes that the errors are random and obey a certain distribution law.

The nature of the accumulation of errors depends on the problem being solved, the method of solution and a variety of other factors that at first sight appear rather inessential, such as the type of computer arithmetic (fixed-point or floating-point), the order in which the arithmetic operations are performed, etc. For example, in computing the sum of $ N $ numbers

$$ A _ {N} = a _ {1} + \dots + a _ {N} $$

the order of the operations is significant. Suppose that the computation is being done in a computer with floating-point arithmetic with $ t $ binary digits, all numbers lying in the interval $ 1/2 < | a _ {n} | \leq 1 $. In a direct computation of $ A _ {N} $ via the recurrence formula

$$ A _ {n + 1 } = A _ {n} + a _ {n + 1 } ,\ \ A _ {1} = a _ {1} ,\ \ n = 1, \dots, N - 1, $$

the majorizing error estimate is of the order $ 2 ^ {-t} N $. But one can proceed otherwise (see [1]). First compute sums of pairs, $ A _ {k} ^ {1} = a _ {2k - 1 } + a _ {2k} $ (if $ N = 2l + 1 $ is odd, one puts $ A _ {l + 1 } ^ {1} = a _ {2l + 1 } $). Then compute further sums of pairs, $ A _ {k} ^ {2} = A _ {2k - 1 } ^ {1} + A _ {2k} ^ {1} $, etc. If $ 2 ^ {m - 1 } < N \leq 2 ^ {m} $, then $ m $ steps of pairwise addition using the formulas

$$ A _ {k} ^ {q} = \ A _ {2k - 1 } ^ {q - 1 } + A _ {2k} ^ {q - 1 } ,\ \ A _ {k} ^ {0} \equiv a _ {k} , $$

yield $ A _ {1} ^ {m} = A _ {N} $; the majorizing error estimate is of the order $ 2 ^ {-t} \mathop{\rm log} _ {2} N $.
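A minimal Python sketch of the pairwise scheme (terms drawn from $ (1/2, 1] $ as in the text; single precision is used so that the effect is visible; all names are illustrative):

```python
import numpy as np

def pairwise_sum(a):
    """Sum by repeated pairing: the error grows like eps * log2(N)
    rather than eps * N as in the left-to-right recurrence."""
    a = list(a)
    while len(a) > 1:
        paired = [a[i] + a[i + 1] for i in range(0, len(a) - 1, 2)]
        if len(a) % 2:                  # odd count: carry the last term up a level
            paired.append(a[-1])
        a = paired
    return a[0]

rng = np.random.default_rng(1)
x = rng.uniform(0.5, 1.0, 10**6).astype(np.float32)
exact = float(np.sum(x.astype(np.float64)))      # reference in double precision

seq = np.float32(0.0)                            # left-to-right recurrence
for v in x:
    seq += v

print(abs(seq - exact), abs(pairwise_sum(x) - exact))   # pairwise is far closer
```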

In typical problems, the numbers $ a _ {n} $ are computed according to formulas, in particular recurrence formulas, or appear consecutively in the operating memory of the computer; in such cases, use of the scheme just described increases the load on the computer memory. However, the sequence of computations can be organized so that the load on the operating memory never exceeds $ \sim \mathop{\rm log} _ {2} N $ cells, as the sketch below shows.
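One way to organize this is to keep at most one completed partial sum of each size $ 2 ^ {q} $ and to merge two blocks as soon as they have equal size, in the manner of a binary counter. A sketch (the helper below is illustrative, not taken from the article):

```python
def streaming_pairwise_sum(stream):
    """Pairwise summation of terms arriving one at a time, holding only
    about log2(N) partial sums: stack[q] is a sum of 2**q consecutive terms."""
    stack = {}                       # level q -> partial sum of 2**q terms
    for v in stream:
        q = 0
        while q in stack:            # two blocks of size 2**q exist: merge them
            v = stack.pop(q) + v
            q += 1
        stack[q] = v
    total = None
    for q in sorted(stack):          # fold leftover blocks, smallest first
        total = stack[q] if total is None else total + stack[q]
    return total
```

Applied to the same data as above, `streaming_pairwise_sum(x)` produces a sum of the same pairwise type while storing only about $ \mathop{\rm log} _ {2} N $ partial sums at any moment.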

In the numerical solution of differential equations one may encounter the following cases. As the grid spacing $ h $ tends to zero, the error increases as $ (a (h)) ^ {h ^ {-q} } $, where $ q > 0 $, while $ \overline{\lim\limits}\; _ {h \rightarrow 0 } | a (h) | > 1 $. Such methods of solution fall into the category of unstable methods. They are of little practical interest.

In stable methods the error will characteristically increase at the rate $ A (h) h ^ {-q} $, where $ \overline{\lim\limits}\; _ {h \rightarrow 0 } | A (h) | < \infty $. The error in such methods is usually estimated in the following way. One constructs an equation for the perturbation, whether due to the introduction of rounding-off errors or other errors, and then investigates the solution of this equation (see [2], [3]).

In more complicated cases, one uses the method of equivalent perturbations (see [1], [4]), developed in connection with the investigation of the accumulation of computational errors in the solution of differential equations (see [3], [5], [6]). Computations according to some standard scheme with rounding-off are treated as computations without rounding-off but for equations with perturbed coefficients. Comparing the solution of the original difference equation with the solution of the equation with perturbed coefficients, one obtains an estimate for the error.

Considerable care is taken to select a method in which $ q $ and $ A (h) $ are as small as possible. When a fixed method of solution is being used, the standard formulas can usually be transformed in such a way that $ q \leq 1 $ (see [3], [5]). This is particularly important in the case of ordinary differential equations, where the number of steps in the various cases turns out to be very high.

The magnitude $ A (h) $ can increase strongly as the integration interval is enlarged. The goal is therefore to use methods with $ A (h) $ as small as possible. When dealing with Cauchy problems, the rounding-off error at each specific step can be regarded, relative to the previous step, as an error in initial data. A lower bound for $ A (h) $ is therefore dependent on the behaviour of the difference between slightly different solutions of the differential equation, as defined by the small-disturbance equation.

In the numerical solution of an ordinary differential equation $ y ^ \prime = f (x, y) $, the small-disturbance equation has the form

$$ \eta ^ \prime = f _ {y} (x, y) \eta + S, $$

and so, when solving the problem on an interval $ (x _ {0} , X) $, one cannot reckon on the constant $ A (h) $ in a majorizing estimate for the computational error being essentially better than

$$ \int\limits _ {x _ {0} } ^ { X } e ^ {\int\limits _ { x } ^ { X } f _ {y} (t, y (t)) d t } dx. $$
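Indeed, with $ \eta (x _ {0} ) = 0 $ the variation-of-constants formula for the small-disturbance equation gives

$$ \eta (X) = \int\limits _ {x _ {0} } ^ { X } e ^ {\int\limits _ { x } ^ { X } f _ {y} (t, y (t)) d t } S (x) d x, $$

so a perturbation $ S $ of unit size per unit length contributes exactly the factor above.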

When solving this problem, therefore, the most commonly used methods are one-step methods of the Runge–Kutta or Adams type (see [3], [7]), where the accumulation of errors is basically determined by the solution of the small-disturbance equation.
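The amplification predicted by the small-disturbance equation is easy to observe with a one-step method: perturb the initial value, integrate, and compare. A minimal sketch using the classical Runge–Kutta scheme (all helper names are illustrative):

```python
import math

def rk4_step(f, x, y, h):
    # one step of the classical fourth-order Runge-Kutta method
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h / 2 * k1)
    k3 = f(x + h / 2, y + h / 2 * k2)
    k4 = f(x + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, y0, x0, X, h):
    n = int(round((X - x0) / h))
    y = y0
    for i in range(n):
        y = rk4_step(f, x0 + i * h, y, h)
    return y

# For y' = y one has f_y = 1, so a perturbation delta introduced at x0 = 0
# is amplified by exp(integral of f_y) = e^5 by the time x reaches X = 5:
f = lambda x, y: y
delta = 1e-9
y1 = integrate(f, 1.0, 0.0, 5.0, 1e-3)
y2 = integrate(f, 1.0 + delta, 0.0, 5.0, 1e-3)
print((y2 - y1) / delta, math.exp(5.0))   # both approximately 148.41
```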

In many methods, the principal term of the error increases according to a similar law, while the computational error builds up significantly more rapidly (see [3]). The domain of practical applicability for such methods is substantially narrower.

The accumulation of computational errors depends essentially on the method employed to solve the grid problem. For example, when grid boundary value problems corresponding to ordinary differential equations are solved by the shooting method or by the double-sweep method, the accumulation of errors behaves like $ A (h) h ^ {-q} $ with the same value of $ q $, but the magnitudes of $ A (h) $ in the two methods may differ to such an extent that in a given situation one of the methods becomes useless: in the shooting method the accumulation of errors behaves like $ c ^ {1/h} $, $ c > 0 $, while for the double-sweep method it is $ Ah ^ {-q} $.

If one adopts a probabilistic approach to the investigation of the accumulation of errors, in some cases one assumes a priori a certain distribution law for the errors (see [2]); in others, one introduces a measure on the space of problems under consideration and, on the basis of this measure, derives a distribution law for the rounding-off errors (see [8], [9]).

When measuring the accuracy of the solution of a problem, the majorizing and probabilistic approaches to estimating the accumulation of computational errors usually yield qualitatively identical results: either the errors remain within admissible bounds in both cases, or they exceed those bounds in both cases.
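As an illustration of the double-sweep method discussed above, here is a minimal sketch for a tridiagonal grid system (the helper and the test problem $ - y ^ {\prime\prime} = 1 $, $ y (0) = y (1) = 0 $, whose exact solution is $ x (1 - x)/2 $, are illustrative):

```python
def double_sweep(lower, diag, upper, rhs):
    """Solve lower[i]*y[i-1] + diag[i]*y[i] + upper[i]*y[i+1] = rhs[i] by the
    double-sweep method: a forward sweep builds y[i] = alpha[i]*y[i+1] + beta[i],
    a backward sweep recovers y. lower[0] and upper[-1] are unused."""
    n = len(diag)
    alpha, beta = [0.0] * n, [0.0] * n
    alpha[0] = -upper[0] / diag[0]
    beta[0] = rhs[0] / diag[0]
    for i in range(1, n):                      # forward sweep
        denom = diag[i] + lower[i] * alpha[i - 1]
        alpha[i] = -upper[i] / denom if i < n - 1 else 0.0
        beta[i] = (rhs[i] - lower[i] * beta[i - 1]) / denom
    y = [0.0] * n
    y[-1] = beta[-1]
    for i in range(n - 2, -1, -1):             # backward sweep
        y[i] = alpha[i] * y[i + 1] + beta[i]
    return y

# Discretization of -y'' = 1 with y(0) = y(1) = 0 on 99 interior grid points:
n, h = 99, 1.0 / 100
y = double_sweep([-1.0] * n, [2.0] * n, [-1.0] * n, [h * h] * n)
print(max(y))                                  # 0.125, the peak of x(1 - x)/2
```

For diagonally dominant systems one has $ | \alpha _ {i} | \leq 1 $, so perturbations introduced at each step of the sweeps are not amplified; this is the source of the favourable behaviour of $ A (h) $ noted above.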

References

[1] V.V. Voevodin, "Computational foundations of linear algebra" , Moscow (1977) (In Russian)
[2] M.R. Shura-Bura, Prikl. Mat. i Mekh. , 16 : 5 (1952) pp. 575–588
[3] N.S. Bakhvalov, "Numerical methods: analysis, algebra, ordinary differential equations" , MIR (1977) (Translated from Russian)
[4] J.H. Wilkinson, "The algebraic eigenvalue problem" , Oxford Univ. Press (1969)
[5] N.S. Bakhvalov, Computational methods and programming , 1 , Moscow (1962) pp. 69–79 (In Russian)
[6] S.K. Godunov, V.S. Ryaben'kii, "The theory of difference schemes" , North-Holland (1964) (Translated from Russian)
[7] N.S. Bakhvalov, "On an estimate of the error at numerical integration of differential equations by Adams' extrapolation method" Dokl. Akad. Nauk SSSR , 104 : 5 (1955) pp. 683–686 (In Russian)
[8] N.S. Bakhvalov, "Optimal convergence bounds for quadratic processes and integral methods of Monte Carlo type for classes of functions" Zh. Vychisl. Mat. i Mat. Fiz. , 4 : 3 (1964) pp. 399–404 (In Russian)
[9] E.A. Lapshin, "A statistical investigation of roundoff errors in the numerical solution of an ordinary differential equation" Zh. Vychisl. Mat. i Mat. Fiz. , 11 : 6 (1971) pp. 1425–1436 (In Russian)

N.S. Bakhvalov

Comments

Statistical treatment of rounding-off (or roundoff) errors can be found in the references below. It should be remarked that statistical analyses give more realistic bounds, but are much more difficult to compute.

References

[a1] P. Henrici, "Discrete variable methods in ordinary differential equations" , Wiley (1962)
[a2] P. Henrici, "Error propagation for difference methods" , Wiley (1963)
[a3] T.E. Hull, J.R. Swenson, "Tests of probabilistic models for propagation of roundoff errors" Commun. ACM , 9 (1966) pp. 108–113
This article was adapted from an original article by Kh.D. Ikramov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.