Richardson extrapolation

A method for accelerating the convergence of solutions of difference problems (see Approximation of a differential boundary value problem by difference boundary value problems). The principal idea of the method is to regard the solution $u_h(x)$ of a convergent difference problem, for fixed $x$, as a function of the vanishing parameter $h$ of the difference grid; to choose an appropriate interpolation function $\chi(h)$ based on several values of the solution $u_h(x)$ for different $h$; and to calculate $\chi(0)$, which is an approximate value of the sought solution $u(x)$, the limit of $u_h(x)$ as $h \rightarrow 0$. Usually, the function $\chi(h)$ is sought in the form of an interpolation polynomial in $h$.

The method is named after L.F. Richardson [1], who was the first to apply it to improve the precision of solutions of difference problems; he called it the deferred approach to the limit.

The theoretical basis of the applicability of this method is the existence of an expansion

$$ u_h(x) = u(x) + \sum_{i=1}^{m-1} h^{\beta_i} v_i(x) + h^B \eta_h(x), $$

where $B > \beta_{m-1} > \dots > \beta_1 > 0$, the functions $v_i$ are independent of $h$, and $\eta_h(x)$ are the values of a grid function that remains bounded as $h \rightarrow 0$. There are several theoretical methods for establishing whether such expansions exist [4].

Usually, linear extrapolation is used: from $m$ values of $u_h(x)$ at the same point $x$, computed for different parameters $h = h_1, \dots, h_m$, one calculates the extrapolated value $u_H(x)$ by the formula

$$ u_H(x) = \sum_{k=1}^{m} \gamma_k u_{h_k}(x), $$

where the weights $\gamma_k$ are defined by the following system of equations:

$$ \sum_{k=1}^{m} \gamma_k = 1; \qquad \sum_{k=1}^{m} \gamma_k h_k^{\beta_i} = 0, \quad i = 1, \dots, m-1. $$
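
The weights can be computed by solving this linear system directly. Below is a minimal sketch (the helper name extrapolation_weights and the NumPy-based setup are illustrative assumptions, not part of the original article):

```python
import numpy as np

def extrapolation_weights(h, beta):
    # Solve: sum_k gamma_k = 1 (row 0) and sum_k gamma_k h_k^{beta_i} = 0
    # (rows 1..m-1). h: grid parameters h_1..h_m; beta: beta_1..beta_{m-1}.
    h = np.asarray(h, dtype=float)
    m = len(h)
    A = np.ones((m, m))
    for i, b in enumerate(beta):
        A[i + 1] = h ** b
    rhs = np.zeros(m)
    rhs[0] = 1.0
    return np.linalg.solve(A, rhs)

# Mock check: u_h = u + h^2 - 3 h^4 with u = 2; the weights cancel both terms.
h = np.array([1.0, 0.5, 0.25])
gamma = extrapolation_weights(h, beta=[2.0, 4.0])
print(gamma @ (2.0 + h**2 - 3.0 * h**4))   # ~2.0
```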

If no two of the $h_k$ are too close to each other, then

$$ | u_H(x) - u(x) | = O\left( \overline{h}^B \right), $$

where $\overline{h} = \max_{1 \leq k \leq m} h_k$; i.e., the order of convergence of $u_H(x)$ to $u(x)$ as $h \rightarrow 0$ is $B$, which is greater than $\beta_1$, the order of convergence of $u_h(x)$ to $u(x)$. In two special cases there exist algorithms for calculating $u_H(x)$ without having to determine the coefficients $\gamma_k$:

a) in the case when $\beta_i = ip$, $p > 0$, $i = 1, \dots, m-1$, the method reduces to interpolation by a polynomial in $h^p$, and from the Lagrange interpolation formula it follows that

$$ u_H(x) = a_1^{(m-1)}, $$

where

$$ a_j^{(0)} = u_{h_j}(x), \quad j = 1, \dots, m; $$

$$ \tag{*} a_j^{(i)} = a_{j+1}^{(i-1)} + \frac{a_{j+1}^{(i-1)} - a_j^{(i-1)}}{( h_j / h_{i+j} )^p - 1}, $$

$$ j = 1, \dots, m-i, \quad i = 1, \dots, m-1; $$
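
A minimal sketch of this Neville-type tableau in Python (the function name richardson_tableau and the central-difference test are illustrative assumptions):

```python
import math

def richardson_tableau(h, u, p):
    # Tableau (*): a_j^{(0)} = u_{h_j}(x); column i eliminates the h^{ip}
    # error term; the extrapolated value is a_1^{(m-1)}.
    m = len(h)
    a = list(u)
    for i in range(1, m):
        for j in range(m - i):   # a[j+1] still holds column i-1 here
            a[j] = a[j + 1] + (a[j + 1] - a[j]) / ((h[j] / h[i + j]) ** p - 1.0)
    return a[0]

# Test on central differences of sin at x = 1 (error expands in h^2, h^4, ...).
x = 1.0
hs = [0.4 / 2 ** k for k in range(4)]
us = [(math.sin(x + h) - math.sin(x - h)) / (2 * h) for h in hs]
print(richardson_tableau(hs, us, p=2), math.cos(x))
```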

b) in the case when $h_i = h_0 b^i$, $0 < b < 1$, $i = 1, \dots, m-1$, formula (*) is replaced by

$$ a_j^{(i)} = a_{j+1}^{(i-1)} + \frac{a_{j+1}^{(i-1)} - a_j^{(i-1)}}{b^{-\beta_i} - 1}. $$
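
A sketch of the corresponding variant for geometric steps with general exponents $\beta_i$ (the helper name is again an illustrative assumption; note that the denominator no longer depends on $j$):

```python
def richardson_tableau_geometric(u, b, beta):
    # Values u[j] = u_{h_j}(x) on grids h_j = h_0 * b**j; column i uses the
    # j-independent denominator b^{-beta_i} - 1 to eliminate the beta_i term.
    m = len(u)
    a = list(u)
    for i in range(1, m):
        q = b ** (-beta[i - 1]) - 1.0
        for j in range(m - i):
            a[j] = a[j + 1] + (a[j + 1] - a[j]) / q
    return a[0]
```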

This algorithm, called Romberg's rule (W. Romberg, 1955), is widely applied in the construction of quadrature formulas (see [5]). In order that the grids $D_{h_i}$ for different $h_i$ (see Approximation of a differential operator by difference operators) have as many nodes in common as possible when carrying out Richardson extrapolation, the parameters $h_i$ are chosen from one of the following sequences: $h_i = h_0 / i$, $i = 1, 2, \dots$; $h_i = h_0 2^{1-i}$, $i = 1, 2, \dots$; $h_0, h_0/2, h_0/3, h_0/4, h_0/6, h_0/8, h_0/12, \dots$.
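
For quadrature, a minimal sketch of Romberg's rule combines the composite trapezoidal rule (whose error expands in even powers of $h$) with the tableau for $p = 2$, reusing the illustrative richardson_tableau defined above:

```python
import math

def trapezoid(f, a, b, n):
    # Composite trapezoidal rule with n subintervals; error ~ c1 h^2 + c2 h^4 + ...
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n)))

# Halved steps h_i = h_0 * 2^{1-i}; extrapolate the trapezoid values with p = 2.
m = 5
hs = [1.0 / 2 ** i for i in range(m)]
vals = [trapezoid(math.exp, 0.0, 1.0, 2 ** i) for i in range(m)]
print(richardson_tableau(hs, vals, p=2), math.e - 1.0)  # integral of e^x on [0,1]
```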

Linear extrapolation is not the only possibility. For instance, in the case $\beta_i = ip$, $p > 0$, one can take as the interpolation function $\chi(h)$ a rational function of the form $\phi(h^p) / \psi(h^p)$, where $\phi(t)$, $\psi(t)$ are polynomials in $t$ of degrees $[(m-1)/2]$ and $[m/2]$, respectively. Then the result of the rational extrapolation $u_H(x) = \chi(0)$ can be calculated by the following recurrent procedure:

$$ d_j^{(-1)} = 0, \quad j = 2, \dots, m; $$

$$ d_j^{(0)} = u_{h_j}(x), \quad j = 1, \dots, m; $$

$$ d_j^{(i)} = d_{j+1}^{(i-1)} + \left( d_{j+1}^{(i-1)} - d_j^{(i-1)} \right) \left\{ \left( \frac{h_j}{h_{i+j}} \right)^p \left[ 1 - \frac{d_{j+1}^{(i-1)} - d_j^{(i-1)}}{d_{j+1}^{(i-1)} - d_{j+1}^{(i-2)}} \right] - 1 \right\}^{-1}, $$

$$ j = 1, \dots, m-i, \quad i = 1, \dots, m-1; \qquad u_H(x) = d_1^{(m-1)}. $$
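
A sketch of this recurrence (rational extrapolation of Bulirsch–Stoer type; the helper name is an illustrative assumption, and the guard against vanishing denominators needed in practice is omitted):

```python
def rational_tableau(h, u, p):
    # Columns d^{(i)} are built from d^{(i-1)} and d^{(i-2)}, with d^{(-1)} = 0
    # and d_j^{(0)} = u_{h_j}(x); returns u_H(x) = d_1^{(m-1)}.
    m = len(h)
    prev2 = [0.0] * (m + 1)     # column i-2
    prev1 = list(u)             # column i-1
    for i in range(1, m):
        cur = []
        for j in range(m - i):
            num = prev1[j + 1] - prev1[j]
            den = (h[j] / h[i + j]) ** p * (
                1.0 - num / (prev1[j + 1] - prev2[j + 1])
            ) - 1.0
            cur.append(prev1[j + 1] + num / den)
        prev2, prev1 = prev1, cur
    return prev1[0]
```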

Richardson extrapolation is conveniently realized on a computer, since in order to achieve high accuracy it relies on the repeated solution of simple difference problems (sometimes with small modifications) of low order of approximation, for which standard methods of solution and computer programs are usually well developed.

References

[1] L.F. Richardson, "The approximate arithmetical solution by finite differences of physical problems involving differential equations, with an application to the stress in a masonry dam" Philos. Trans. Roy. Soc. Ser. A , 210 (1910) pp. 307–357
[2] R. Bulirsch, J. Stoer, "Fehlerabschätzungen und Extrapolation mit rationalen Funktionen bei Verfahren vom Richardson-Typus" Numer. Math. , 6 : 5 (1964) pp. 413–427
[3] D.C. Joyce, "Survey of extrapolation processes in numerical analysis" SIAM Review , 13 : 4 (1971) pp. 435–490
[4] G.I. Marchuk, V.V. Shaidurov, "Difference methods and their extrapolations" , Springer (1983) (Translated from Russian)
[5] N.S. Bakhvalov, "Numerical methods: analysis, algebra, ordinary differential equations" , MIR (1977) (Translated from Russian)
[a1] T. Håvie, "Generalized Neville type extrapolation schemes" BIT , 19 (1979) pp. 204–213
How to Cite This Entry:
Richardson extrapolation. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Richardson_extrapolation&oldid=17677
This article was adapted from an original article by V.V. Shaidurov (originator), which appeared in the Encyclopedia of Mathematics, ISBN 1402006098.