Chebyshev method

A method for obtaining a class of iteration algorithms (cf. Iteration algorithm) for finding a simple real root of an equation

$$f(x)=0,\tag{1}$$

where $f$ is a sufficiently smooth function.

The basis of the method lies in the formal representation of the inverse function $x=F(y)$ of $f(x)$ via the Taylor formula. If $\alpha$ is a sufficiently close approximation to a root of equation (1), if $c_0=\beta=f(\alpha)$ and if $f'(\alpha)\neq0$, then

$$x=F(y)=\alpha+\sum_{n\geq1}d_n(y-\beta)^n,\tag{2}$$

where the coefficients $d_n$ are defined recursively from the identity $x\equiv F(f(x))$ using the Taylor coefficients $c_n$ of the function $f(x)=\sum_{n\geq0}c_n(x-\alpha)^n$. Putting $y=0$ in (2), one obtains the relation

$$x=\alpha-\frac{f(\alpha)}{f'(\alpha)}-\left(\frac{f(\alpha)}{f'(\alpha)}\right)^2\frac{f''(\alpha)}{2f'(\alpha)}-{}\tag{3}$$

$$-\left(\frac{f(\alpha)}{f'(\alpha)}\right)^3\left(\frac12\left(\frac{f''(\alpha)}{f'(\alpha)}\right)^2-\frac{f'''(\alpha)}{6f'(\alpha)}\right)-{}$$

$$-\left(\frac{f(\alpha)}{f'(\alpha)}\right)^4\left(\frac58\left(\frac{f''(\alpha)}{f'(\alpha)}\right)^3-\frac{5f''(\alpha)f'''(\alpha)}{12(f'(\alpha))^2}+\frac{f^{\mathrm{IV}}(\alpha)}{24f'(\alpha)}\right)-\dotsb$$
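For example, comparing the coefficients of the powers of $x-\alpha$ in the identity $x\equiv F(f(x))$ gives for the first coefficients of (2)

$$d_1=\frac{1}{c_1},\quad d_2=-\frac{c_2}{c_1^3},\quad d_3=\frac{2c_2^2-c_1c_3}{c_1^5};$$

with $c_1=f'(\alpha)$, $c_2=f''(\alpha)/2$, $c_3=f'''(\alpha)/6$ and $y=0$, the terms $d_1(y-\beta)+d_2(y-\beta)^2+d_3(y-\beta)^3$ of (2) become the first three correction terms of (3).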

Taking a certain number of terms on the right-hand side of (3) gives a formula for the iteration algorithm; with two terms, for example, one obtains Newton's method, while with three terms one obtains an iteration method of the form

$$x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)}-\left(\frac{f(x_n)}{f'(x_n)}\right)^2\frac{f''(x_n)}{2f'(x_n)},\quad n=0,1,\ldots\tag{4}$$
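For illustration, a minimal Python sketch of iteration (4), assuming $f'$ and $f''$ are available in closed form; the function name, tolerance and starting guess below are illustrative choices, not part of the original article.

def chebyshev_iteration(f, df, d2f, x0, tol=1e-12, max_iter=50):
    # Third-order iteration (4): with u = f(x_n)/f'(x_n),
    # x_{n+1} = x_n - u - u**2 * f''(x_n) / (2*f'(x_n)).
    # x0 is assumed close to a simple root, so f'(x) != 0 throughout.
    x = x0
    for _ in range(max_iter):
        u = f(x) / df(x)
        x_next = x - u - u**2 * d2f(x) / (2 * df(x))
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Example: the cube root of 2 as the root of x**3 - 2 = 0, starting from x0 = 1.5
root = chebyshev_iteration(lambda x: x**3 - 2, lambda x: 3 * x**2, lambda x: 6 * x, 1.5)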

The rate of convergence of the approximations $x_n$ to the root increases with the number of terms of (3) taken into consideration (cf. [2]); in particular, Newton's method (two terms) is of second order, while (4) converges cubically to a simple root. The method can be extended to functional equations (cf. [3]).

References

[1a] P.L. Chebyshev, Collected works, 5, Moscow-Leningrad (1951) pp. 7–25 (In Russian)
[1b] P.L. Chebyshev, Collected works, 5, Moscow-Leningrad (1951) pp. 173–176 (In Russian)
[2] I.S. Berezin, N.P. Zhidkov, "Computing methods", Pergamon (1973) (Translated from Russian)
[3] M.I. Nechepurenko, Uspekhi Mat. Nauk, 9 : 2 (1954) pp. 163–170


Comments

This method is also called Chebyshev's root-finding method. A related approach is based on inverse interpolation, cf. [a1].

References

[a1] A. Ralston, "A first course in numerical analysis", McGraw-Hill (1965)
[a2] P.J. Davis, "Interpolation and approximation", Dover, reprint (1975) pp. 108–126
[a3] F.B. Hildebrand, "Introduction to numerical analysis", McGraw-Hill (1974)