Differentiation, numerical
Latest revision as of 19:35, 5 June 2020


Finding the derivative of a function by numerical methods. Such differentiation is resorted to when the methods of [[Differential calculus|differential calculus]] are inapplicable (e.g. the function is known only from a table) or involve considerable difficulties (the analytic expression of the function is complicated).

Let a function $u$ be defined on an interval $[a, b]$ and let the nodal points $x_i$, $a = x_1 < x_2 < \dots < x_n = b$, be given. The totality of points $(x_i, u_i = u(x_i))$, $i = 1, \dots, n$, is known as a table. The result of numerical differentiation of the table is a function $u_n^k(x)$ which approximates, in some sense, the $k$-th derivative $d^k u(x)/dx^k$ of the function $u$ on some set $X_n^k$ of points $x$. Numerical differentiation is expedient only if the function $u_n^k(x)$ can be computed with little effort for each $x \in X_n^k$. Linear methods of numerical differentiation are commonly employed; the result is then written in the form

$$ u_n^k(x) = \sum_{i=1}^n u_i a_i^k(x), \tag{1} $$

where $a_i^k(x)$ are functions defined on $X_n^k$. The most popular method for obtaining formulas (1) is as follows: one constructs the function

$$ u_n(x) = \sum_{i=1}^n u_i a_i(x), $$

interpolating $u(x)$, and assumes that

$$ u_n^k(x) \equiv \frac{d^k u_n(x)}{dx^k} = \sum_{i=1}^n u_i \frac{d^k a_i(x)}{dx^k} . $$
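As an illustration (a sketch, not part of the original article), this approach can be realized in Python with NumPy: build the interpolating polynomial $u_n$ through all the nodes and differentiate it $k$ times. The node layout and the test function $u = \sin$ are illustrative assumptions.

```python
# Sketch: numerical differentiation via an interpolating polynomial.
# The test function u(x) = sin(x) and the node layout are assumptions.
import numpy as np

def interpolated_derivative(xs, us, k):
    """Approximate the k-th derivative of a tabulated function by
    differentiating the polynomial u_n that interpolates the table."""
    coeffs = np.polyfit(xs, us, deg=len(xs) - 1)  # interpolating polynomial
    return np.poly1d(np.polyder(coeffs, m=k))     # its k-th derivative

xs = np.linspace(0.0, 1.0, 7)   # nodal points x_1 < ... < x_n
us = np.sin(xs)                 # table values u_i = u(x_i)
d1 = interpolated_derivative(xs, us, k=1)
# Inside [0, 1], d1(x) closely approximates cos(x).
```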

The accuracy of algorithms based on the interpolation formulas of Lagrange, Newton and others depends strongly on the choice of interpolation scheme, and may sometimes be very low even if the function $u$ is sufficiently smooth and the number of nodal points is large [1]. Algorithms of numerical differentiation based on spline interpolation [2] are often free from this disadvantage. If only the approximate values of the derivative at the nodal points $x_i$ are needed, formula (1) assumes the form

$$ u_n^k(x_j) = \sum_{i=1}^n u_i a_{ij}^k \tag{2} $$

and $u_n^k(x_j)$ is fully defined by specifying a coefficient matrix $a_{ij}^k$ for a given $k$. Formulas such as (2) are known as difference formulas for numerical differentiation. The coefficients $a_{ij}^k$ of such formulas are determined from the condition that the difference

$$ \frac{d^k u(x_j)}{dx^k} - \sum_{i=1}^n u_i a_{ij}^k = \xi_j^n $$

has the highest possible order of smallness with respect to $h_n = \max_i |x_{i+1} - x_i|$. As a rule, formulas (2) are very simple and easy to handle. Thus, if $h = h_n = x_2 - x_1 = \dots = x_n - x_{n-1}$ (equally spaced nodes), they assume the form
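The determination of such coefficients can be sketched as follows (an illustration, not from the article): requiring formula (2) to be exact for all polynomials of degree $< n$ yields a linear system for the $a_{ij}^k$, and this exactness requirement is what maximizes the order of smallness of $\xi_j^n$. The nodes and the point $x_j$ below are illustrative assumptions.

```python
# Sketch: compute the coefficients a_{ij}^k of a difference formula (2)
# by requiring exactness on polynomials of degree < n (Taylor conditions).
# The nodes and the evaluation point x_j are illustrative assumptions.
import numpy as np
from math import factorial

def difference_weights(xs, xj, k):
    d = np.asarray(xs, dtype=float) - xj   # node offsets x_i - x_j
    n = len(d)
    # Row m enforces sum_i w_i * d_i^m / m! == (1 if m == k else 0),
    # i.e. the formula reproduces d^k/dx^k exactly on 1, x, ..., x^{n-1}.
    A = np.array([d**m / factorial(m) for m in range(n)])
    b = np.zeros(n)
    b[k] = 1.0
    return np.linalg.solve(A, b)

h = 0.1
w1 = difference_weights([-h, 0.0, h], 0.0, k=1)  # central first derivative
w2 = difference_weights([-h, 0.0, h], 0.0, k=2)  # central second derivative
# w1 recovers (-1/(2h), 0, 1/(2h)); w2 recovers (1/h^2, -2/h^2, 1/h^2).
```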

$$ \frac{du(x_j)}{dx} = \frac{u_{j+1} - u_j}{h} + O(h_n) = \frac{u_j - u_{j-1}}{h} + O(h_n), $$

$$ \frac{du(x_j)}{dx} = \frac{u_{j+1} - u_{j-1}}{2h} + O(h^2), $$

$$ \frac{d^2 u(x_j)}{dx^2} = \frac{u_{j+1} - 2u_j + u_{j-1}}{h^2} + O(h^2). $$
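A quick numerical check of these formulas (an illustrative sketch; the test function $u = \sin$ and the point are assumptions) confirms the stated orders: shrinking $h$ tenfold cuts the forward-difference error about tenfold, but the central-difference errors about a hundredfold.

```python
# Sketch: verify the orders O(h) and O(h^2) of the difference formulas
# above on a smooth test function (u = sin, an illustrative choice).
import math

def forward(u, x, h):
    return (u(x + h) - u(x)) / h                       # O(h)

def central(u, x, h):
    return (u(x + h) - u(x - h)) / (2 * h)             # O(h^2)

def second(u, x, h):
    return (u(x + h) - 2 * u(x) + u(x - h)) / h**2     # O(h^2)

x = 0.7
e_fwd = [abs(forward(math.sin, x, h) - math.cos(x)) for h in (1e-1, 1e-2)]
e_cen = [abs(central(math.sin, x, h) - math.cos(x)) for h in (1e-1, 1e-2)]
e_sec = [abs(second(math.sin, x, h) + math.sin(x)) for h in (1e-1, 1e-2)]
# Reducing h tenfold cuts e_fwd roughly 10x, but e_cen and e_sec roughly 100x.
```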

Numerical differentiation algorithms are often applied to tables in which the values of $u(x_i)$ are given (or obtained) inaccurately. In such cases a preliminary smoothing is needed, since direct application of the formulas may result in large errors [3].
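The effect can be seen in a worst-case sketch (the function, step sizes and perturbation level are illustrative assumptions): if the table values are accurate only to $\delta$, a central difference picks up an error of order $\delta / h$, so decreasing $h$ eventually makes the result worse rather than better.

```python
# Sketch: error amplification when differentiating inexact table values.
# A worst-case perturbation of size delta is injected by hand; the test
# function and the numbers are illustrative assumptions.
import math

delta = 1e-6   # accuracy to which the tabulated values u_i are known
x = 0.7

def central_perturbed(h):
    up = math.sin(x + h) + delta   # u_{j+1} perturbed by +delta
    um = math.sin(x - h) - delta   # u_{j-1} perturbed by -delta
    return (up - um) / (2 * h)

e_coarse = abs(central_perturbed(1e-2) - math.cos(x))  # ~ h^2/6 + delta/h
e_fine   = abs(central_perturbed(1e-7) - math.cos(x))  # delta/h = 10 dominates
# Shrinking h by five orders of magnitude *increases* the error here.
```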

====References====

[1] I.S. Berezin, N.P. Zhidkov, "Computing methods", Pergamon (1973) (Translated from Russian)
[2] J.H. Ahlberg, E.N. Nilson, J.L. Walsh, "The theory of splines and their applications", Acad. Press (1967)
[3] V.A. Morozov, "The differentiation problem and certain algorithms of approximation by experimental information", ''Computing methods and programming'', '''14''', Moscow (1970) pp. 46–62 (In Russian)

====Comments====

Explicit differentiation formulas are given in, e.g., [a1], [a2].

====References====

[a1] A. Segun, M. Abramowitz, "Handbook of mathematical functions" , Appl. Math. Ser. , 55 , Nat. Bur. Standards (1970)
[a2] F.B. Hildebrand, "Introduction to numerical analysis" , McGraw-Hill (1974)
How to Cite This Entry:
Differentiation, numerical. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Differentiation,_numerical&oldid=15489
This article was adapted from an original article by V.A. Morozov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article