Kutta-Merson method

From Encyclopedia of Mathematics
 
Latest revision as of 17:13, 14 February 2020

A five-stage Runge–Kutta method with fourth-order accuracy. Applied to the Cauchy problem

$$y'(x)=f(x,y(x)),\quad x_0\leq x\leq X,\quad y(x_0)=y_0,\tag{1}$$

the method is as follows:

\begin{equation}\label{2}\tag{2}
\begin{gathered}
k_1 = h f(x_0,y_0), \quad y_0 = y(x_0),\\
k_2 = h f \big( x_0 + \tfrac13 h, y_0 + \tfrac13 k_1 \big),\\
k_3 = h f \big( x_0 + \tfrac13 h, y_0 + \tfrac16 k_1 + \tfrac16 k_2 \big),\\
k_4 = h f \big( x_0 + \tfrac12 h, y_0 + \tfrac18 k_1 + \tfrac38 k_3 \big),\\
k_5 = h f \big( x_0 + h, y_0 + \tfrac12 k_1 - \tfrac32 k_3 + 2 k_4 \big),\\
y^1(x_0+h) = y_0 + \tfrac12 k_1 - \tfrac32 k_3 + 2k_4,\\
y^2(x_0+h) = y_0 + \tfrac16 k_1 + \tfrac23 k_4 + \tfrac16 k_5.
\end{gathered}
\end{equation}

The number $R=0.2|y^1-y^2|$ serves as an estimate of the error and is used for automatic selection of the integration step. If $\epsilon$ is the prescribed accuracy of the computation, the integration step is selected as follows. First choose some initial step and start the computation by \eqref{2} to obtain the number $R$. If $R>\epsilon$, divide the integration step by 2; if $R\leq\epsilon/64$, double it. If $\epsilon/64<R\leq\epsilon$, the selected integration step is satisfactory. Now replace the initial point $x_0$ by $x_0+h$ and repeat the entire procedure. This yields an approximate solution $y^2$; the quantity $y^1$ is mainly auxiliary.
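The stage formulas \eqref{2} and the step-selection rule can be sketched in code. The following is a minimal illustration only, not a library routine; the function names `kutta_merson_step` and `integrate` are my own:

```python
def kutta_merson_step(f, x0, y0, h):
    """One Kutta-Merson step; returns (y1, y2, R) as in formula (2)."""
    k1 = h * f(x0, y0)
    k2 = h * f(x0 + h/3, y0 + k1/3)
    k3 = h * f(x0 + h/3, y0 + k1/6 + k2/6)
    k4 = h * f(x0 + h/2, y0 + k1/8 + 3*k3/8)
    k5 = h * f(x0 + h,   y0 + k1/2 - 3*k3/2 + 2*k4)
    y1 = y0 + k1/2 - 3*k3/2 + 2*k4      # auxiliary (reference) value y^1
    y2 = y0 + k1/6 + 2*k4/3 + k5/6      # fourth-order approximation y^2
    R = 0.2 * abs(y1 - y2)              # error estimate
    return y1, y2, R

def integrate(f, x0, y0, X, h, eps):
    """Integrate y' = f(x, y) from x0 to X with the step rule above."""
    x, y = x0, y0
    while x < X:
        h = min(h, X - x)               # do not step past X
        _, y2, R = kutta_merson_step(f, x, y, h)
        if R > eps:
            h /= 2                      # step too coarse: halve and retry
            continue
        x, y = x + h, y2                # accept the step
        if R <= eps / 64:
            h *= 2                      # step very accurate: double it
    return y
```

For example, `integrate(lambda x, y: y, 0.0, 1.0, 1.0, 0.1, 1e-8)` approximates $e$.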

Since

$$y^2(x_0+h)=y_0+\frac16k_1+\frac23k_4+\frac16hf(x_0+h,y^1(x_0+h)),$$

i.e. the formula for $y^1$ is, as it were, "nested" in the formula for $y^2$, the method described here for estimating the error and selecting the integration step is known as an imbedded Runge–Kutta method.
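This nesting can be confirmed numerically: the argument fed into the fifth stage $k_5$ is exactly $y^1(x_0+h)$, so $y^2$ computed from the stages coincides with the "nested" form above. A small sketch (names my own):

```python
def check_nesting(f, x0, y0, h):
    """Return y^2 computed via k5 and via the nested form using y^1."""
    k1 = h * f(x0, y0)
    k2 = h * f(x0 + h/3, y0 + k1/3)
    k3 = h * f(x0 + h/3, y0 + k1/6 + k2/6)
    k4 = h * f(x0 + h/2, y0 + k1/8 + 3*k3/8)
    k5 = h * f(x0 + h,   y0 + k1/2 - 3*k3/2 + 2*k4)
    y1 = y0 + k1/2 - 3*k3/2 + 2*k4      # same expression as the k5 argument
    y2_stage  = y0 + k1/6 + 2*k4/3 + k5/6
    y2_nested = y0 + k1/6 + 2*k4/3 + h * f(x0 + h, y1) / 6
    return y2_stage, y2_nested
```

Both return values agree for any right-hand side $f$, e.g. for the nonlinear choice $f(x,y)=x+y^2$.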

Standard programs for the Kutta–Merson method are available in Algol [1], [2].

References

[1] J. Christiansen, "Numerical solution of ordinary simultaneous differential equations of the 1st order using a method for automatic step change" Numer. Math. , 14 (1970) pp. 317–324
[2] P.M. Lukehart, "Algorithm 218. Kutta Merson" Comm. Assoc. Comput. Mach. , 6 : 12 (1963) pp. 737–738
[3] L. Fox, "Numerical solution of ordinary and partial differential equations" , Pergamon (1962)
[4] G.N. Lance, "Numerical methods for high speed computers" , Iliffe (1960)


Comments

Often Runge's name is added: Runge–Kutta–Merson method.

The (Runge–) Kutta–Merson method is due to R.H. Merson [a6]. The order of the numerical approximation defined by $y^2$ is four and that of the auxiliary (reference) solution $y^1$ is three. Hence, in general, the difference of these two numerical approximations is only of order three, so that a conservative error estimate results (i.e., it overestimates the local error $y(x_0+h)-y^2$ for small $h$). However, for linear equations with constant coefficients, a correct estimate of the local error is obtained as $h\to0$. This can be shown by observing that for linear equations

$$y^1=\left[1+h\frac{d}{dx}+\frac{\left(h\frac{d}{dx}\right)^2}{2}+\frac{\left(h\frac{d}{dx}\right)^3}{6}+\frac{\left(h\frac{d}{dx}\right)^4}{24}\right]y(x_0),$$

$$y^2=y^1+\frac{\left(h\frac{d}{dx}\right)^5}{144}y(x_0),$$

$$y(x_0+h)=y^1+\left[\frac{\left(h\frac{d}{dx}\right)^5}{120}+O(h^6)\right]y(x_0).$$

Thus,

$$R=0.2|y^1-y^2|=\left|\frac{\left(h\frac{d}{dx}\right)^5}{720}y(x_0)\right|$$

and

$$y(x_0+h)-y^2=\left[\frac{\left(h\frac{d}{dx}\right)^5}{720}+O(h^6)\right]y(x_0),$$

so that $R$ equals the local error $y(x_0+h)-y^2$ within order $h^6$. A further discussion of this method may be found in [a1] and [a5]. A Fortran code of the Kutta–Merson method is available in the NAG library. The Kutta–Merson method is the earliest proposed method belonging to the family of imbedded methods. Higher-order imbedded formulas providing asymptotically-correct approximations to the local error have been derived by E. Fehlberg [a3], [a4]; they have the additional feature that the error constants of the main formula (the formula of highest order) are minimized. However, since the relation between the true (global) error and the local error is generally not known, it is questionable whether one should use the highest-order formula as the main formula, and modern insights advocate interchanging the roles of the main formula and the reference formula (see [a5]). The recently developed imbedded method of J.R. Dormand and P.J. Prince [a2], which combines an eighth-order formula with a seventh-order reference formula, is considered one of the most efficient high-accuracy methods available today [a5].
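For the linear test equation $y'=\lambda y$ the identities above can be checked numerically. The following sketch (my own; the values of $\lambda$ and $h$ are arbitrary) computes the Merson stages, the estimate $R$, and the true local error, and confirms that both equal $|h\lambda|^5/720$ up to terms of order $h^6$:

```python
import math

# For y' = lam*y, both R = 0.2|y^1 - y^2| and the true local error
# |y(x0+h) - y^2| should equal |h*lam|^5/720 * |y0| up to O(h^6).
lam, y0, h = -1.0, 1.0, 0.05   # arbitrary test values
f = lambda x, y: lam * y

k1 = h * f(0.0, y0)
k2 = h * f(h/3, y0 + k1/3)
k3 = h * f(h/3, y0 + k1/6 + k2/6)
k4 = h * f(h/2, y0 + k1/8 + 3*k3/8)
k5 = h * f(h,   y0 + k1/2 - 3*k3/2 + 2*k4)
y1 = y0 + k1/2 - 3*k3/2 + 2*k4
y2 = y0 + k1/6 + 2*k4/3 + k5/6

z = h * lam
R = 0.2 * abs(y1 - y2)                  # Merson's error estimate
local_err = abs(math.exp(z) * y0 - y2)  # true local error
leading = abs(z)**5 / 720               # predicted leading term
```

Here the discrepancy between `R`, `local_err`, and `leading` is of order $|z|^6$, in agreement with the expansion above.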

References

[a1] J.C. Butcher, "The numerical analysis of ordinary differential equations. Runge–Kutta and general linear methods" , Wiley (1987)
[a2] J.R. Dormand, P.J. Prince, "A family of embedded Runge–Kutta formulae" J. Comp. Appl. Math. , 6 (1980) pp. 19–26
[a3] E. Fehlberg, "Classical fifth-, sixth-, seventh-, and eighth-order Runge–Kutta formulas with stepsize control" NASA Techn. Rep. , 287 (Abstract in: Computing (1969), 93–106)
[a4] E. Fehlberg, "Low-order classical Runge–Kutta formulas with stepsize control and their application to some heat transfer problems" NASA Techn. Rep. , 315 (Abstract in: Computing (1969), 61–71)
[a5] E. Hairer, S.P. Nørsett, G. Wanner, "Solving ordinary differential equations" , I. Nonstiff problems , Springer (1987)
[a6] R.H. Merson, "An operational method for the study of integration processes" , Proc. Symp. Data Processing , Weapons Res. Establ. Salisbury , Salisbury (1957) pp. 110–125
[a7] S.O. Fatunla, "Numerical methods for initial value problems in ordinary differential equations" , Acad. Press (1988)
How to Cite This Entry:
Kutta-Merson method. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Kutta-Merson_method&oldid=44750
This article was adapted from an original article by V.V. Pospelov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article