Rosenbrock methods


A numerical integration method of the Runge–Kutta type for stiff systems of ordinary differential equations (cf. also Runge–Kutta method; Stiff differential system). For Rosenbrock methods these systems are usually put in the autonomous form

$$ \tag{a1} \dot{y} = f(y), \quad t > t_0, \quad y(t_0) = y_0. $$

The property of stiffness, which means that the solution $y$ is composed of components possessing widely differing time constants, necessitates some form of implicitness for reasons of numerical stability [a1], [a2]. The best-known implicit method for stiff initial-value problems is the backward Euler method

$$ \tag{a2} y_{n+1} = y_n + \tau f(y_{n+1}), $$

where $\tau = t_{n+1} - t_n$ is the step size and $y_n$ is the approximation to $y(t)$ at time $t = t_n$. Since $y_{n+1}$ is defined implicitly, this numerical solution itself must also be approximated. Usually the iterative Newton method, or a modification thereof, is used, again for reasons of numerical stability. However, as the Euler method has only order of consistency $p = 1$, it makes no sense to carry out the iteration to high accuracy. In practice it is often sufficient to apply just one iteration per time step. If one then takes $y_n$ as the initial iterate, the following numerical result is found:

$$ \tag{a3} y_{n+1} = y_n + k, \quad (I - \tau J)\, k = \tau f(y_n), $$

where $I$ is the unit matrix and $J = f'(y_n)$ denotes the Jacobian matrix of the vector function $f$ at $y_n$.
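As an illustration, the linearized step (a3) can be sketched in a few lines of Python; the function names and the stiff test system below are illustrative choices, not part of the original formulation:

```python
import numpy as np

def linearized_euler_step(f, jac, y_n, tau):
    """One linearly implicit Euler step (a3): solve (I - tau*J) k = tau*f(y_n)."""
    J = jac(y_n)                      # Jacobian matrix f'(y_n)
    I = np.eye(len(y_n))
    k = np.linalg.solve(I - tau * J, tau * f(y_n))
    return y_n + k

# Illustrative stiff linear system y' = A y with widely differing time constants;
# an explicit Euler step with tau = 0.1 would be unstable here.
A = np.array([[-1000.0, 0.0], [0.0, -1.0]])
f = lambda y: A @ y
jac = lambda y: A
y = np.array([1.0, 1.0])
for _ in range(10):
    y = linearized_euler_step(f, jac, y, tau=0.1)
```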

Of interest is that the numerical solution is now computed by solving a system of linear algebraic equations, rather than a system of non-linear equations. H.H. Rosenbrock, in his landmark paper [a3] of 1963, proposed to generalize this linearly implicit approach to methods using more stages, so as to achieve a higher order of consistency $p$. The crucial consideration is that the iterative Newton method is no longer used; instead, stable formulas are derived by working the Jacobian matrix directly into the integration formula. His idea has found widespread use. A generally accepted form in the literature for a so-called $s$-stage Rosenbrock method is (cf. [a1], [a2]):

$$ \tag{a4} y_{n+1} = y_n + \sum_{i=1}^{s} b_i k_i, $$

$$ k_i = \tau f \left( y_n + \sum_{j=1}^{i-1} \alpha_{ij} k_j \right) + \tau J \sum_{j=1}^{i} \gamma_{ij} k_j . $$

Assuming that $\gamma_{ii} = \gamma$, i.e. equal for all stages, one time step costs one Jacobian evaluation, one LU-decomposition of the matrix $I - \tau \gamma J$, and $s$ backsolves accompanied by $s$ function evaluations. Per step, Rosenbrock methods are therefore computationally expensive. Yet they are attractive since they are of one-step type, can be made A-stable or L-stable (cf. also Stability), and are easy to use, since no iteration procedure is required as for genuinely implicit methods.
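This cost structure can be made concrete with a minimal Python sketch of a generic $s$-stage step of (a4). The interface and coefficient-array layout are illustrative assumptions; the reuse of a single LU factorization of $I - \tau \gamma J$ for all stages follows the formula directly:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def rosenbrock_step(f, jac, y_n, tau, alpha, gamma, b):
    """One step of an s-stage Rosenbrock method in the form (a4).

    alpha and gamma are lower-triangular (s, s) coefficient arrays with
    gamma[i, i] equal for all i; b is the weight vector of length s.
    """
    alpha, gamma, b = np.asarray(alpha), np.asarray(gamma), np.asarray(b)
    s = len(b)
    J = jac(y_n)
    I = np.eye(len(y_n))
    # One Jacobian evaluation and one LU factorization per step ...
    lu_piv = lu_factor(I - tau * gamma[0, 0] * J)
    k = np.zeros((s, len(y_n)))
    for i in range(s):
        # ... followed by s function evaluations and s backsolves.
        rhs = tau * f(y_n + alpha[i, :i] @ k[:i]) + tau * J @ (gamma[i, :i] @ k[:i])
        k[i] = lu_solve(lu_piv, rhs)
    return y_n + b @ k

# With s = 1, gamma = [[1.0]] and b = [1.0] this reduces to the linearized
# backward Euler step (a3).
```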

Rosenbrock methods are also called Runge–Kutta–Rosenbrock methods. If one puts $J = 0$, a classical explicit Runge–Kutta method results. Rosenbrock methods have proved very valuable in practice. A good example is provided by the FORTRAN solver RODAS from [a2]. The method underlying this solver uses $6$ stages, is L-stable and stiffly accurate, and is of order $p = 4$. This solver is also applicable to linearly implicit systems of the form $M \dot{y} = f(y)$. In the literature, variants of the Rosenbrock method are discussed in which the Jacobian matrix $J$ is frozen for a number of time steps, or even replaced by an approximation which renders the linear system solution cheaper (see [a1], [a2]). For very large-scale problems this is obviously of practical importance, as it can reduce the CPU time considerably.
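A minimal sketch of the frozen-Jacobian idea, assuming the one-stage scheme (a3), a constant step size, and a hypothetical refresh interval (not something prescribed by the literature cited), is:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def integrate_frozen_jacobian(f, jac, y0, tau, n_steps, refresh=5):
    """Integrate with the one-stage scheme (a3), recomputing and refactoring
    I - tau*J only every `refresh` steps; the intermediate steps cost one
    function evaluation and one backsolve each."""
    y = np.array(y0, dtype=float)
    I = np.eye(len(y))
    for n in range(n_steps):
        if n % refresh == 0:          # refresh the Jacobian only occasionally
            lu_piv = lu_factor(I - tau * jac(y))
        y = y + lu_solve(lu_piv, tau * f(y))
    return y
```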

References

[a1] K. Dekker, J.G. Verwer, "Stability of Runge–Kutta methods for stiff nonlinear differential equations", North-Holland (1984)
[a2] E. Hairer, G. Wanner, "Solving ordinary differential equations II: Stiff and differential-algebraic problems", Springer (1991)
[a3] H.H. Rosenbrock, "Some general implicit processes for the numerical solution of differential equations", Comput. J., 5 (1963) pp. 329–330