Cauchy problem, numerical methods for ordinary differential equations

From Encyclopedia of Mathematics

A Cauchy problem is a problem of determining a function (or several functions) satisfying a differential equation (or system of differential equations) and assuming given values at some fixed point. Let

$$y(x) = (y_1(x), \dots, y_n(x)), \qquad f(x, y) = (f_1(x, y), \dots, f_n(x, y))$$

be vector-functions defined and continuous, respectively, on a closed interval $[a, b]$ and in a closed domain $G$, where $\|\cdot\|$ is some norm in the finite-dimensional space $\mathbb{R}^n$. With this notation, one writes the Cauchy problem for a system of first-order ordinary differential equations as follows:

$$y' = f(x, y), \qquad y(a) = y_0. \tag{1}$$

By a suitable choice of new unknown functions one can reduce the Cauchy problem for any system of ordinary differential equations (of arbitrary order) to this form.

Problem (1) has a solution if the function $f(x, y)$ is continuous in $G$. A sufficient condition for the solution to be unique is that either the Osgood condition

$$\|f(x, y_1) - f(x, y_2)\| \le \omega(\|y_1 - y_2\|), \tag{2}$$

where $\omega(s)$ is a function such that

$$\int_0^{\epsilon} \frac{ds}{\omega(s)} = \infty \quad \text{for every } \epsilon > 0,$$

or the stronger Lipschitz condition

$$\|f(x, y_1) - f(x, y_2)\| \le L \, \|y_1 - y_2\| \tag{3}$$

holds. The number $L$ is called the Lipschitz constant. If $f(x, y)$ is continuously differentiable with respect to $y$, a possible value for the Lipschitz constant is

$$L = \sup_{(x, y)} \left\| \frac{\partial f(x, y)}{\partial y} \right\|. \tag{4}$$

In various cases where the Lipschitz constant (4) is too large, a successful solution of the Cauchy problem by numerical methods requires special numerical techniques, despite the fact that, theoretically speaking, the problem is uniquely solvable. This happens, in particular, if the eigenvalues of the matrix $\partial f / \partial y$ are "widely scattered", i.e. the largest eigenvalue (in absolute value) is hundreds, or even thousands, of times larger than the smallest. Such systems of differential equations are called stiff systems, and the corresponding problems are stiff Cauchy problems. One "source" of stiff systems is the reduction of a partial differential equation to a system of ordinary differential equations, e.g. via the method of lines.
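
To make "widely scattered" eigenvalues concrete, here is a small made-up example (the matrix below is illustrative and does not come from the article): a $2 \times 2$ linear system $y' = Ay$ whose eigenvalues differ by a factor of a thousand.

```python
import numpy as np

# A made-up 2x2 linear system y' = A y whose eigenvalues are widely
# scattered: lambda_1 = -1 and lambda_2 = -1000 (stiffness ratio 1000).
A = np.array([[-500.5,  499.5],
              [ 499.5, -500.5]])

eigenvalues = np.linalg.eigvals(A)
stiffness_ratio = max(abs(eigenvalues)) / min(abs(eigenvalues))
print(sorted(eigenvalues))   # approximately [-1000.0, -1.0]
print(stiffness_ratio)       # approximately 1000.0
```

The solution contains components decaying like $e^{-1000x}$ and $e^{-x}$; an explicit method must resolve the fastest scale even after it has died out, which is the practical difficulty of stiff problems.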

Numerical methods for ordinary differential equations normally consist of one or more formulas defining relations for the function to be found at a discrete sequence of points $x_0 < x_1 < \dots < x_N$, the set of which is called a grid. The foundations of numerical methods in general, and of those for differential equations in particular, were laid by L. Euler. One of the simplest methods for solving the Cauchy problem is named after him. This method is as follows. Expand the solution of problem (1) in a Taylor series about the point $x_k$:

$$y(x_k + h) = y(x_k) + h y'(x_k) + \frac{h^2}{2} y''(x_k) + \dots$$

If $h$ is small in absolute value, then, dropping terms of order $h^2$ and higher, one obtains an approximate equality

$$y(x_k + h) \approx y(x_k) + h f(x_k, y(x_k)).$$

The approximate solution may now be evaluated at the point $x_{k+1} = x_k + h$ by the formula

$$y_{k+1} = y_k + h f(x_k, y_k).$$

This is Euler's method.
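
Euler's method is simple enough to state as a short program. The sketch below (function and variable names are my own, for illustration) applies it to the scalar test problem $y' = y$, $y(0) = 1$, whose exact solution is $e^x$.

```python
def euler(f, x0, y0, h, n):
    """Integrate y' = f(x, y), y(x0) = y0 with n Euler steps of size h."""
    xs, ys = [x0], [y0]
    for _ in range(n):
        y0 = y0 + h * f(x0, y0)   # y_{k+1} = y_k + h f(x_k, y_k)
        x0 = x0 + h
        xs.append(x0)
        ys.append(y0)
    return xs, ys

# Test problem y' = y, y(0) = 1, integrated to x = 1 with h = 0.01.
xs, ys = euler(lambda x, y: y, 0.0, 1.0, 0.01, 100)
print(ys[-1])   # 1.01**100 = 2.7048..., versus the exact e = 2.71828...
```

The first-order accuracy is visible in the result: with $h = 0.01$ the error at $x = 1$ is about $0.013$, roughly proportional to $h$.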

Numerical methods were subsequently improved to a considerable degree. This development proceeded in two main directions: methods later known as Runge–Kutta methods (cf. Runge–Kutta method), and linear multi-step methods, the most important of which is the Adams method.

One of the advantages of Runge–Kutta methods is that the algorithms they produce are uniform, i.e. they remain unchanged upon passage from one grid point to another. In addition, the elementary step length of integration can be modified in Runge–Kutta methods, according to the required computational accuracy, without significantly complicating the algorithm itself (see Kutta–Merson method; Runge rule). Fairly reliable two-sided methods have been developed on the basis of these methods. A major shortcoming is that, in order to compute an approximate solution at one grid point, one has to evaluate the right-hand side of the differential equation (1) several times. This implies, particularly for equations with complicated right-hand sides, a considerable increase in computation time.
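
The shortcoming can be made concrete with the classical fourth-order Runge–Kutta method (a standard representative of the class, used here as an illustration, not a method singled out by the article): each elementary step costs four evaluations of the right-hand side.

```python
def rk4_step(f, x, y, h):
    # Classical fourth-order Runge-Kutta step: note that the
    # right-hand side f is evaluated four times per grid point.
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h / 2 * k1)
    k3 = f(x + h / 2, y + h / 2 * k2)
    k4 = f(x + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Test problem y' = y, y(0) = 1, integrated to x = 1 with h = 0.1.
x, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y = rk4_step(lambda x, y: y, x, y, h)
    x += h
print(y)   # close to e = 2.718281..., in error by only about 1e-6
```

Ten steps already match $e$ to about six digits, but they cost forty right-hand side evaluations; for an expensive $f$ this cost dominates.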

In linear multi-step methods, including the Adams method, one has to compute the right-hand side term at one grid point only. This is the main advantage of these methods. However, in order to begin computing with a linear multi-step formula, one first has to calculate additional "initial values". Because of this, the algorithm is not self-starting: the first few values must be computed using other formulas. A more serious shortcoming of linear multi-step methods is that the step length of integration cannot be simply altered: one must use grids of constant step length.
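
Both properties show up in the following sketch of the two-step Adams–Bashforth method (an illustrative implementation, not code from the article): each step needs only one new evaluation of $f$, but the extra starting value $y_1$ must be supplied by another method, here a single Euler step.

```python
def adams_bashforth2(f, x0, y0, h, n):
    # Two-step Adams-Bashforth: only one new evaluation of f per step,
    # but the method is not self-starting -- the starting value y1 must
    # be produced by another formula (here: one Euler step).
    xs = [x0 + i * h for i in range(n + 1)]
    ys = [y0, y0 + h * f(x0, y0)]               # y1 from Euler's method
    fs = [f(xs[0], ys[0]), f(xs[1], ys[1])]
    for k in range(1, n):
        ys.append(ys[k] + h / 2 * (3 * fs[k] - fs[k - 1]))
        fs.append(f(xs[k + 1], ys[k + 1]))      # the single new evaluation
    return ys

# Test problem y' = y, y(0) = 1, integrated to x = 1 with h = 0.01.
ys = adams_bashforth2(lambda x, y: y, 0.0, 1.0, 0.01, 100)
print(ys[-1])   # approximately e = 2.71828...
```

Note also that the coefficients $3/2$ and $-1/2$ are valid only for equidistant grid points, which is the step-length restriction mentioned above.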

Linear multi-step methods have provided the basis for the development of what are known as predictor-corrector methods, comprising a pair of linear multi-step formulas, one of which (the predictor) is usually explicit, while the other (the corrector) is implicit; e.g. the predictor might be

$$y_{k+1}^{(0)} = y_k + \frac{h}{2} \left( 3 f_k - f_{k-1} \right)$$

and the corrector

$$y_{k+1} = y_k + \frac{h}{2} \left( f_{k+1}^{(0)} + f_k \right),$$

where

$$f_k = f(x_k, y_k), \qquad f_{k+1}^{(0)} = f(x_{k+1}, y_{k+1}^{(0)}).$$
Predictor-corrector methods are successfully applied in the solution of stiff systems of ordinary differential equations.
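
A minimal sketch of one such pair, assuming (for illustration) a two-step Adams–Bashforth predictor and the trapezoidal rule as corrector, each applied once per step (so-called PECE mode):

```python
def predictor_corrector(f, x0, y0, h, n):
    # Illustrative predictor-corrector pair: two-step Adams-Bashforth
    # predictor, trapezoidal-rule corrector, applied once per step.
    ys = [y0, y0 + h * f(x0, y0)]                 # starting value via Euler
    fs = [f(x0, ys[0]), f(x0 + h, ys[1])]
    for k in range(1, n):
        x_next = x0 + (k + 1) * h
        y_pred = ys[k] + h / 2 * (3 * fs[k] - fs[k - 1])   # predict
        f_pred = f(x_next, y_pred)                         # evaluate
        ys.append(ys[k] + h / 2 * (f_pred + fs[k]))        # correct
        fs.append(f(x_next, ys[k + 1]))                    # evaluate
    return ys

# Test problem y' = y, y(0) = 1, integrated to x = 1 with h = 0.01.
ys = predictor_corrector(lambda x, y: y, 0.0, 1.0, 0.01, 100)
print(ys[-1])   # approximately e = 2.71828...
```

The corrector is implicit, but the predictor supplies a starting guess good enough that a single substitution (rather than a full iteration) usually suffices.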

Although higher-order differential equations are formally reducible to systems of first-order equations, methods adapted to the actual form of the differential equation are sometimes much more effective. In this connection, linear multi-step methods have been developed which utilize higher-order derivatives, such as the Störmer method.
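
For instance, Störmer's rule $y_{k+1} = 2 y_k - y_{k-1} + h^2 f(x_k, y_k)$ integrates a second-order equation $y'' = f(x, y)$ directly, without reduction to a first-order system. A sketch (the test problem and starting procedure are illustrative choices):

```python
def stoermer(f, x0, y0, y1, h, n):
    # Stoermer's rule for y'' = f(x, y), applied to the equation in its
    # original second-order form:
    #   y_{k+1} = 2 y_k - y_{k-1} + h^2 f(x_k, y_k)
    ys = [y0, y1]
    for k in range(1, n):
        ys.append(2 * ys[k] - ys[k - 1] + h * h * f(x0 + k * h, ys[k]))
    return ys

# Test problem y'' = -y, y(0) = 1, y'(0) = 0; exact solution cos(x).
h = 0.01
y1 = 1.0 - h * h / 2             # Taylor start: cos(h) up to O(h^4)
ys = stoermer(lambda x, y: -y, 0.0, 1.0, y1, h, 100)
print(ys[-1])   # approximately cos(1) = 0.54030...
```

Like other multi-step formulas it is not self-starting; here the second starting value comes from a short Taylor expansion.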

References

[1] I.S. Berezin, N.P. Zhidkov, "Computing methods" , Pergamon (1973) (Translated from Russian)
[2] N.S. Bakhvalov, "Numerical methods: analysis, algebra, ordinary differential equations" , MIR (1977) (Translated from Russian)
[3] J.M. Watt (ed.) , Modern numerical methods for ordinary differential equations , Clarendon Press (1976)


Comments

In the last set of formulas in the article the predictor is the two-step Adams–Bashforth method and the corrector is the trapezoidal rule.

In the English literature on mathematics, the Cauchy problem, both for ordinary differential equations and for time-dependent partial differential equations, is usually termed the initial-value problem.

It seems that in [a2] stiff ordinary differential equations were first recognized as equations requiring special integration methods, that is, methods which are (more or less) unconditionally stable. The linear multi-step methods based on backward numerical differentiation formulas proposed in [a2] are still considered one of the most efficient families of integration methods for stiff problems, largely through the work of C.W. Gear (cf. [a6]). For non-stiff problems, however, that is not true: for such problems, and within the class of linear multi-step methods, the Adams methods are still considered the most useful ones (cf. [a8]).

As described in [a7], the method of lines can be traced back to the early work of J.L. Lagrange [a9], J. Fourier [a17] and E. Rothe [a18], and is now a powerful tool for attacking initial-value problems for time-dependent partial differential equations.
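
A small sketch of how the method of lines produces a stiff system (the equation, boundary conditions and grid size are illustrative assumptions): semi-discretizing the heat equation $u_t = u_{xx}$ in space leaves a linear system of ordinary differential equations $v' = Av$ whose eigenvalues spread further apart as the spatial grid is refined.

```python
import numpy as np

# Method of lines for u_t = u_xx on (0, 1) with u = 0 at both ends:
# discretizing x with step dx turns the PDE into the ODE system
# v' = A v, where A is the standard three-point difference Laplacian.
n = 50                      # number of interior grid points (arbitrary choice)
dx = 1.0 / (n + 1)
A = (np.diag(-2.0 * np.ones(n)) +
     np.diag(np.ones(n - 1), 1) +
     np.diag(np.ones(n - 1), -1)) / dx**2

eigs = np.linalg.eigvalsh(A)
print(eigs.min(), eigs.max())    # about -1.04e4 and -9.87
print(eigs.min() / eigs.max())   # stiffness ratio about 1.05e3
```

Halving $dx$ roughly quadruples the most negative eigenvalue while the least negative one stays near $-\pi^2$, so the stiffness of the semi-discrete system grows without bound as the spatial grid is refined.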

The original reference for Euler's method is [a5]. In spite of its simplicity and its poor behaviour as a numerical integration method, it facilitates comparisons with more complicated methods and can be used to illustrate a number of important phenomena (cf. [a8]).

In addition to Runge–Kutta and linear multi-step methods, Taylor series methods may be considered as a third main direction of development. These three main classes of numerical ordinary differential equation solvers have, respectively, the characteristic of using more right-hand side evaluations, more back values and more derivatives. A variety of new methods can be constructed as combinations of methods from the three main classes. An important class of such new methods is that of the multi-stage multi-step methods; they carry various names, such as hybrid methods (cf. [a1]), or simply multi-step Runge–Kutta methods. A second class is formed by the multi-step multi-derivative methods, the so-called Obreshkov methods [a15].

The Kutta–Merson method [a11] belongs to the family of so-called imbedded Runge–Kutta methods. These methods provide in each step an estimate of the local error. However, since the relation between the true (global) error and the local error is generally not known, and because the local error estimate may be conservative, imbedded methods do not provide reliable information on the error of the solution, so that it is not clear how they could be used as a starting point for constructing reliable two-sided methods. Nevertheless, these methods are extremely useful because they provide a tool by which the accuracy of the calculations can be monitored during the integration process. It should be mentioned that the recently developed imbedded method of J.R. Dormand and P.J. Prince [a4] is considerably better than the Merson method (see [a7]).
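
The idea of an imbedded pair can be shown with the smallest possible example (purely illustrative; this is not the Kutta–Merson or Dormand–Prince pair): Euler's method (order one) imbedded in Heun's method (order two). The two formulas share the evaluation $k_1$, and their difference estimates the local error.

```python
def heun_step_with_estimate(f, x, y, h):
    # Tiny imbedded pair: Euler (order 1) inside Heun (order 2).
    # Both results reuse k1, so the error estimate is almost free.
    k1 = f(x, y)
    k2 = f(x + h, y + h * k1)
    y_low = y + h * k1                    # Euler result
    y_high = y + h / 2 * (k1 + k2)        # Heun result
    return y_high, abs(y_high - y_low)    # advance with the higher order

# One step of size 0.1 on y' = y, y(0) = 1.
y, err = heun_step_with_estimate(lambda x, y: y, 0.0, 1.0, 0.1)
print(y, err)   # y = 1.105 (exact: exp(0.1) = 1.10517...), err = 0.005
```

As the comment above stresses, `err` estimates only the local error of the lower-order formula; it monitors accuracy during the integration but says nothing directly about the global error.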

The disadvantage of multi-step methods of having coefficients which depend on the step size does not necessarily mean that one must use grids of constant step length. It is quite possible to derive general expressions for the coefficients in terms of arbitrary grid point positions. However, there is a more elegant alternative, furnished by a so-called Nordsieck method [a14]. This method stores at each grid point the information needed for the computation of the numerical solution in a form independent of the particular step length used. Since every Nordsieck method is equivalent to a linear multi-step method (see, e.g., [a7]), it can be used as a means of implementing linear multi-step methods (see [a6]).

Predictor-corrector methods are fixed-point iteration schemes for solving the implicit linear multi-step equations. These methods were introduced in [a12] and [a13]. For reasons of convergence, they are not suitable for integrating stiff systems. In the case of stiff systems, one should use Newton-type iteration together with a sufficiently stable implicit ordinary differential equation method, such as the implicit backward differentiation methods.
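
A sketch of this approach, using the simplest representative (backward Euler, the one-step backward differentiation formula, with Newton iteration) on the stiff test equation $y' = -1000 y$; the step size and iteration count are illustrative choices.

```python
def backward_euler_newton(f, dfdy, x0, y0, h, n, newton_iters=5):
    # Implicit (backward) Euler, y_{k+1} = y_k + h f(x_{k+1}, y_{k+1}),
    # with the implicit equation solved by Newton's method.  Fixed-point
    # (predictor-corrector) iteration would diverge here, since h*L > 1.
    y = y0
    for k in range(n):
        x_next = x0 + (k + 1) * h
        z = y                                   # Newton iteration for y_{k+1}
        for _ in range(newton_iters):
            g = z - y - h * f(x_next, z)        # residual of implicit equation
            z = z - g / (1.0 - h * dfdy(x_next, z))
        y = z
    return y

# Stiff test problem y' = -1000 y, y(0) = 1, with h = 0.01: backward Euler
# damps the solution (y_{k+1} = y_k / 11), whereas explicit Euler would
# multiply by 1 - 1000 h = -9 each step and blow up.
y = backward_euler_newton(lambda x, y: -1000.0 * y,
                          lambda x, y: -1000.0, 0.0, 1.0, 0.01, 10)
print(y)   # 11**-10, about 3.9e-11: bounded and positive, as it should be
```

For this linear test equation Newton converges in one iteration; for a nonlinear stiff system the same structure applies with the Jacobian $\partial f / \partial y$ in place of the scalar derivative.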

An excellent textbook for the whole spectrum of both stiff and non-stiff ordinary differential equation solvers is provided by [a7] (Vol. II, Stiff problems, to appear) and the less recent [a10]. An up-to-date account of Runge–Kutta methods for stiff equations and of the non-linear stability analysis of these methods is given in the monograph [a3]. More advanced textbooks on ordinary differential equations are [a8], [a16] and the recent book of J.C. Butcher [a1], which, with over 2000 references, provides an almost complete bibliography of the field of numerical analysis of ordinary differential equations up to 1984.

References

[a1] J.C. Butcher, "The numerical analysis of ordinary differential equations. Runge–Kutta and general linear methods" , Wiley (1987)
[a2] C.F. Curtiss, J.O. Hirschfelder, "Integration of stiff equations" Proc. Nat. Acad. Sci. USA , 38 (1952) pp. 235–243
[a3] K. Dekker, J.G. Verwer, "Stability of Runge–Kutta methods for stiff nonlinear differential equations" , CWI Monographs , 2 , North-Holland (1984)
[a4] J.R. Dormand, P.J. Prince, "A family of embedded Runge–Kutta formulae" J. Comp. Appl. Math. , 6 (1980) pp. 19–26
[a5] L. Euler, "Institutionum calculi integralis. Volumen Secundum (1769)" G. Kowalewski (ed.) , Opera Omnia Ser. 1; opera mat. , 12 (1914)
[a6] C.W. Gear, "Numerical initial value problems in ordinary differential equations" , Prentice-Hall (1971)
[a7] E. Hairer, S.P. Nørsett, G. Wanner, "Solving ordinary differential equations" , I. Nonstiff problems , Springer (1987)
[a8] P. Henrici, "Discrete variable methods in ordinary differential equations" , Wiley (1962)
[a9] J.L. Lagrange, "Solutions des problèmes en calcul" , Oeuvres , 1 pp. 471–668
[a10] J.D. Lambert, "Computational methods in ordinary differential equations" , Wiley (1973)
[a11] R.H. Merson, "An operational method for the study of integration processes" , Proc. Symp. Data Processing , Weapons Res. Establ. Salisbury , Salisbury (1957) pp. 110–125
[a12] W.E. Milne, "Numerical integration of ordinary differential equations" Amer. Math. Monthly , 33 (1926) pp. 455–460
[a13] F.R. Moulton, "New methods in exterior ballistics" , Univ. Chicago Press (1926)
[a14] A. Nordsieck, "On numerical integration of ordinary differential equations" Math. Comp. , 16 (1962) pp. 22–49
[a15] N. [N. Obreshkov] Obrechkoff, "Neue Quadraturformeln" Abh. Preuss. Akad. Wissenschaft. Math. Nat. Kl. , 4 (1940)
[a16] H.J. Stetter, "Analysis of discretization methods for ordinary differential equations" , Springer (1973)
[a17] J.B.J. Fourier, "Théorie analytique de la chaleur" , Paris (1822)
[a18] E. Rothe, "Two-dimensional parabolic boundary-value problems as special case of one-dimensional boundary-value problems" Math. Ann. , 102 (1930) pp. 650–670
How to Cite This Entry:
Cauchy problem, numerical methods for ordinary differential equations. V.V. Pospelov (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Cauchy_problem,_numerical_methods_for_ordinary_differential_equations&oldid=18901
This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098