Non-linear stability of numerical methods

From Encyclopedia of Mathematics
Latest revision as of 17:43, 1 July 2020

Numerical stability theory for the initial value problem $y ^ { \prime } = f ( x , y )$, $y ( 0 ) = y _ { 0 }$, where $f : \mathbf{R} \times \mathbf{C} ^ { n } \rightarrow \mathbf{C} ^ { n }$, is concerned with the question of whether the numerical discretization inherits the dynamic properties of the differential equation. Stability concepts are usually based on structural assumptions on $f$. For non-linear problems, the breakthrough was achieved by G. Dahlquist in his seminal paper [a3]. There, he studied multi-step discretizations of problems satisfying a one-sided Lipschitz condition. Let $\langle \cdot , \cdot \rangle$ denote an inner product on $\mathbf{C} ^ { n }$ and let $\| \cdot \|$ be the induced norm.

Runge–Kutta methods.

An $s$-stage Runge–Kutta discretization of $y ^ { \prime } = f ( x , y )$, $y ( 0 ) = y _ { 0 }$ is given by

\begin{equation*} g _ { i } = y _ { 0 } + h \sum _ { j = 1 } ^ { s } a _ { i j } \, f ( x _ { 0 } + c _ { j } h , g _ { j } ) , \quad i = 1 , \dots , s , \end{equation*}

\begin{equation*} y _ { 1 } = y _ { 0 } + h \sum _ { i = 1 } ^ { s } b _ { i }\, f ( x _ { 0 } + c _ { i } h , g _ { i } ). \end{equation*}

Here, $h$ denotes the step-size and $y_1$ is the Runge–Kutta approximation to $y ( x _ { 0 } + h )$. (For a thorough discussion of such methods, see [a1], [a2], and [a4]; see also Runge–Kutta method.)
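The step above can be sketched in a few lines of Python. The helper `irk_step` and its fixed-point stage solver are illustrative assumptions of this sketch (a production code would use Newton's method); the 1-stage Gauss method used here is the implicit midpoint rule.

```python
import math

# Hypothetical helper: one step of an s-stage implicit Runge-Kutta method.
# The stage equations g_i = y0 + h*sum_j a[i][j]*f(x0 + c[j]*h, g_j) are
# solved by fixed-point iteration, which suffices for this mild scalar demo.
def irk_step(f, x0, y0, h, a, b, c, iters=50):
    s = len(b)
    g = [y0] * s                      # initial guess for the stages
    for _ in range(iters):
        g = [y0 + h * sum(a[i][j] * f(x0 + c[j] * h, g[j]) for j in range(s))
             for i in range(s)]
    # update: y1 = y0 + h*sum_i b[i]*f(x0 + c[i]*h, g_i)
    return y0 + h * sum(b[i] * f(x0 + c[i] * h, g[i]) for i in range(s))

# 1-stage Gauss method (implicit midpoint rule): a = [[1/2]], b = [1], c = [1/2],
# applied to the test equation y' = -y, y(0) = 1; the method has order 2.
f = lambda x, y: -y
y1 = irk_step(f, 0.0, 1.0, 0.1, [[0.5]], [1.0], [0.5])
```

For this step-size the result agrees with the exact value $e^{-0.1}$ to about $10^{-4}$, consistent with a second-order method.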

B-stability.

If the problem satisfies the global contractivity condition

\begin{equation*} \operatorname { Re } \langle f ( x , y ) - f ( x , z ) , y - z \rangle \leq 0 , \quad y , z \in \mathbf{C} ^ { n }, \end{equation*}

then the difference of two solutions is a non-increasing function of $x$. Let $y_1$, $z_1$ denote the numerical solutions after one step of size $h$ with initial values $y _ { 0 }$, $z_0$, respectively. A Runge–Kutta method is called B-stable (or sometimes BN-stable) if the contractivity condition implies $\| y _ { 1 } - z _ { 1 } \| \leq \| y _ { 0 } - z _ { 0 } \|$ for all $h > 0$. Examples of B-stable Runge–Kutta methods are given below. The definition of B-stability extends to arbitrary one-step methods in an obvious way.
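B-stability can be illustrated numerically. The sketch below (the model problem $y' = -y^3$, the bisection solver, and the chosen initial values are all assumptions of this sketch) applies the implicit midpoint rule, which is B-stable, with a deliberately large step-size and checks contractivity of the numerical solutions.

```python
def midpoint_step(y0, h):
    # Implicit midpoint rule for y' = -y^3, a contractive problem since
    # Re< f(y) - f(z), y - z > = -(y^3 - z^3)(y - z) <= 0 on the reals.
    # The stage equation g = y0 - (h/2)*g^3 is solved by bisection:
    # phi(g) = g + (h/2)*g^3 - y0 is monotonically increasing.
    lo, hi = (0.0, y0) if y0 >= 0 else (y0, 0.0)
    phi = lambda g: g + 0.5 * h * g ** 3 - y0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if phi(mid) <= 0:
            lo = mid
        else:
            hi = mid
    g = 0.5 * (lo + hi)
    return y0 - h * g ** 3            # y1 = y0 + h*f(g)

# Even for a very large step-size, the difference of two numerical
# solutions must not grow if the method is B-stable.
h = 10.0
y1, z1 = midpoint_step(1.0, h), midpoint_step(0.5, h)
contractive = abs(y1 - z1) <= abs(1.0 - 0.5)
```

An explicit method on the same problem with the same step-size would blow up immediately, which is the practical point of the definition.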

Algebraic stability.

A Runge–Kutta method is called algebraically stable if its coefficients satisfy

i) $b _ { i } \geq 0$, $i = 1 , \dots , s$;

ii) $( b _ { i } a _ { i j } + b _ { j } a _ { j i } - b _ { i } b _ { j } ) _ { i , j = 1 } ^ { s }$ is positive semi-definite.

Algebraic stability plays an important role in the theory of B-convergence. Note that algebraic stability implies B-stability. For non-confluent methods, i.e. $c _ { i } \neq c _ { j }$ for $i \neq j$, both concepts are equivalent. The following families of implicit Runge–Kutta methods are algebraically stable and therefore B-stable: Gauss, Radau IA, Radau IIA, Lobatto IIIC.
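Conditions i) and ii) are easy to verify for a concrete method. The sketch below checks them for the 2-stage Gauss method (order 4); for Gauss methods the algebraic stability matrix vanishes identically, so positive semi-definiteness is immediate.

```python
import math

# Butcher coefficients of the 2-stage Gauss method (order 4).
r = math.sqrt(3) / 6
A = [[0.25, 0.25 - r], [0.25 + r, 0.25]]
b = [0.5, 0.5]

# Algebraic stability matrix M = (b_i a_ij + b_j a_ji - b_i b_j).
M = [[b[i] * A[i][j] + b[j] * A[j][i] - b[i] * b[j] for j in range(2)]
     for i in range(2)]

# For Gauss methods, M = 0 (up to round-off), hence positive semi-definite;
# together with b_i >= 0 this gives algebraic stability.
max_entry = max(abs(M[i][j]) for i in range(2) for j in range(2))
```

For methods with non-trivial $M$, one would instead check that the eigenvalues of the symmetric matrix $M$ are non-negative.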

Error growth function.

Let $\mathcal{F} _ { \nu }$ ($\nu \in \mathbf{R}$) denote the class of all problems satisfying the one-sided Lipschitz condition

\begin{equation*} \operatorname { Re } \langle f ( x , y ) - f ( x , z ) , y - z \rangle \leq \nu \| y - z \| ^ { 2 } , \quad y , z \in \mathbf{C} ^ { n }. \end{equation*}

For $\nu > 0$, this condition is weaker than contractivity and allows trajectories to expand with increasing $x$. For any given real number $\xi$ and step-size $h > 0$, set $\nu = \xi / h$ and denote by $\varphi ( \xi )$ the smallest number for which the estimate $\| y _ { 1 } - z _ { 1 } \| \leq \varphi ( \xi ) \| y _ { 0 } - z _ { 0 } \|$ holds for all problems in $\mathcal{F} _ { \nu }$. The function $\varphi$ is called the error growth function. For B-stable Runge–Kutta methods, the error growth function is superexponential, i.e. it satisfies $\varphi ( 0 ) = 1$ and $\varphi ( \xi _ { 1 } ) \varphi ( \xi _ { 2 } ) \leq \varphi ( \xi _ { 1 } + \xi _ { 2 } )$ for all $\xi _ { 1 }$, $\xi_2$ having the same sign. This result can be used in the asymptotic stability analysis of Runge–Kutta methods, see [a5].
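The superexponential property can be illustrated with the implicit Euler method, for which a standard computation gives $\varphi(\xi) = 1/(1-\xi)$ for $\xi < 1$ (treat this closed form as an assumption of the sketch; it follows from $\|y_1-z_1\|^2 \leq \|y_0-z_0\| \|y_1-z_1\| + \xi \|y_1-z_1\|^2$).

```python
# Error growth function of the implicit Euler method (assumed closed form):
# phi(xi) = 1/(1 - xi) for xi < 1.
phi = lambda xi: 1.0 / (1.0 - xi)

# Check phi(0) = 1 and the superexponential inequality
# phi(x1)*phi(x2) <= phi(x1 + x2) for pairs of the same sign.
ok = phi(0.0) == 1.0
for x1, x2 in [(0.3, 0.2), (0.05, 0.4), (-1.0, -2.5), (-0.7, -0.7)]:
    ok = ok and phi(x1) * phi(x2) <= phi(x1 + x2) + 1e-15
```

For this particular $\varphi$ the inequality reduces to $\xi_1 \xi_2 \geq 0$, which is exactly the same-sign restriction in the theorem.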

Linear multi-step methods.

A linear multi-step discretization of $y ^ { \prime } = f ( x , y )$ is given by

\begin{equation*} \sum _ { i = 0 } ^ { k } \alpha _ { i } y _ { m + i } = h \sum _ { i = 0 } ^ { k } \beta _ { i } \,f ( x _ { m + i} , y _ { m + i } ). \end{equation*}

Let $\rho ( \zeta ) = \sum _ { i = 0 } ^ { k } \alpha _ { i } \zeta ^ { i }$ and $\sigma ( \zeta ) = \sum _ { i = 0 } ^ { k } \beta _ { i } \zeta ^ { i }$ be the generating polynomials. Using the normalization $\sigma ( 1 ) = 1$, the associated one-leg method is defined by

\begin{equation*} \sum _ { i = 0 } ^ { k } \alpha _ { i } y _ { m + i } = h f \left( \sum _ { i = 0 } ^ { k } \beta _ { i } x _ { m + i } , \sum _ { i = 0 } ^ { k } \beta _ { i } y _ { m + i } \right). \end{equation*}
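As a concrete instance of the one-leg construction, consider the 2-step BDF method, $\rho(\zeta) = (3\zeta^2 - 4\zeta + 1)/2$ and $\sigma(\zeta) = \zeta^2$. Since $\sigma$ has a single non-zero coefficient, the one-leg and multi-step variants coincide here. The sketch below (the linear test problem and tolerances are assumptions of the sketch) solves the implicit relation exactly for $y' = -y$.

```python
import math

# 2-step BDF for y' = -y:  (3/2) y_{m+2} - 2 y_{m+1} + (1/2) y_m = -h y_{m+2},
# which can be solved explicitly for y_{m+2} on this linear problem.
def bdf2(y0, y1, h, steps):
    ys = [y0, y1]
    for _ in range(steps):
        ys.append((2.0 * ys[-1] - 0.5 * ys[-2]) / (1.5 + h))
    return ys

h = 0.01
# Start from the exact values y(0) = 1 and y(h) = exp(-h).
ys = bdf2(1.0, math.exp(-h), h, 100)
err = abs(ys[-1] - math.exp(-h * 101))   # error at x = 1.01
```

The final error is of size $O(h^2)$, consistent with the second-order convergence of the 2-step BDF method.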

(For a thorough discussion, see [a4].) A one-leg method is called G-stable if there exists a real symmetric positive-definite $k \times k$-matrix $G$ such that any two numerical solutions satisfy $\| Y _ { 1 } - Z _ { 1 } \| _ { G } \leq \| Y _ { 0 } - Z _ { 0 } \| _ { G }$ for all step-sizes $h > 0$, whenever the problem is contractive ($\nu = 0$). Here, $Y _ { m } = ( y _ { m + k - 1 } , \ldots , y _ { m } ) ^ { T }$ and

\begin{equation*} \| Y _ { m } \| _ { G } ^ { 2 } = \sum _ { i , j = 1 } ^ { k } g_{ij} \langle y _ { m + i - 1} , y _ { m + j - 1} \rangle. \end{equation*}

G-stability is closely related to linear stability: If the generating polynomials have no common divisor, then the multi-step method is A-stable if and only if the corresponding one-leg method is G-stable. Thus, the $2$-step BDF method is G-stable. There is also a purely algebraic condition that implies G-stability.
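For the 2-step BDF method, one admissible choice (an assumption of this sketch, verified numerically below) is $G = \frac{1}{4} \begin{pmatrix} 5 & -2 \\ -2 & 1 \end{pmatrix}$. In the real scalar case it satisfies the Dahlquist-type identity $(\frac{3}{2} y_2 - 2 y_1 + \frac{1}{2} y_0)\, y_2 = \|(y_2, y_1)\|_G^2 - \|(y_1, y_0)\|_G^2 + \frac{1}{4}(y_2 - 2 y_1 + y_0)^2$, and since the last term is non-negative, contractivity of the problem yields G-stability.

```python
import random

# Candidate G-norm for the 2-step BDF method, G = (1/4) [[5, -2], [-2, 1]]:
# ||(u, v)||_G^2 = (5 u^2 - 4 u v + v^2) / 4.
def gnorm2(u, v):
    return (5 * u * u - 4 * u * v + v * v) / 4.0

# Verify the identity
#   (3/2 y2 - 2 y1 + 1/2 y0) * y2
#     = ||(y2, y1)||_G^2 - ||(y1, y0)||_G^2 + (1/4) (y2 - 2 y1 + y0)^2
# on random real data.
random.seed(0)
ok = True
for _ in range(1000):
    y0, y1, y2 = (random.uniform(-5, 5) for _ in range(3))
    lhs = (1.5 * y2 - 2.0 * y1 + 0.5 * y0) * y2
    rhs = gnorm2(y2, y1) - gnorm2(y1, y0) + 0.25 * (y2 - 2 * y1 + y0) ** 2
    ok = ok and abs(lhs - rhs) < 1e-9
```

The matrix $G$ is not unique; any $G$ satisfying such an identity with a non-negative remainder term witnesses G-stability.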

The concepts of G-stability and algebraic stability have been successfully extended to general linear methods, see [a1] and [a4].

Notwithstanding the merits of B- and G-stability, contractive problems have quite simple dynamics. Other classes of problems have been considered that admit a more complex behaviour. For a review, see [a7].

The long-time behaviour of time discretizations of non-linear evolution equations is an active field of research at the moment (1998). Basically, two different approaches exist for the analysis of numerical stability:

a) energy estimates;

b) estimates for the linear problem, combined with perturbation techniques. Whereas energy estimates require algebraic stability on the part of the methods, linear stability (A($\theta$)-stability) is sufficient for the second approach. (For an illustration of these techniques in connection with convergence, see [a6].) Both approaches have their merits. The latter, however, is particularly important for methods that are not B-stable, e.g., for linearly implicit Runge–Kutta methods.

References

[a1] J. Butcher, "The numerical analysis of ordinary differential equations: Runge–Kutta and general linear methods" , Wiley (1987)
[a2] K. Dekker, J.G. Verwer, "Stability of Runge–Kutta methods for stiff nonlinear differential equations" , North-Holland (1984)
[a3] G. Dahlquist, "Error analysis for a class of methods for stiff non-linear initial value problems" , Numerical Analysis, Dundee 1975 , Lecture Notes Math. , 506 , Springer (1976) pp. 60–72
[a4] E. Hairer, G. Wanner, "Solving ordinary differential equations II: Stiff and differential-algebraic problems" , Springer (1996) (Edition: Second, revised)
[a5] E. Hairer, M. Zennaro, "On error growth functions of Runge–Kutta methods" Appl. Numer. Math. , 22 (1996) pp. 205–216
[a6] Ch. Lubich, A. Ostermann, "Runge–Kutta approximations of quasi-linear parabolic equations" Math. Comp. , 64 (1995) pp. 601–627
[a7] A. Stuart, A.R. Humphries, "Model problems in numerical stability theory for initial value problems" SIAM Review , 36 (1994) pp. 226–257
How to Cite This Entry:
Non-linear stability of numerical methods. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Non-linear_stability_of_numerical_methods&oldid=12027
This article was adapted from an original article by Alexander Ostermann (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article