Jackson inequality

An inequality estimating the rate of decrease of the best approximation error of a function by trigonometric or algebraic polynomials in terms of its differentiability and finite-difference properties. Let $ f $ be a $ 2 \pi $-periodic continuous function on the real axis, let $ E _ {n} ( f ) $ be the best uniform approximation error of $ f $ by trigonometric polynomials $ T _ {n} $ of degree $ n $, i.e.

$$ E _ {n} ( f ) = \inf _ {T _ {n} } \max _ { x } | f ( x) - T _ {n} ( x) | , $$

and let

$$ \omega ( f ; \delta ) = \max _ {| t _ {1} - t _ {2} | \leq \delta } | f ( t _ {1} ) - f ( t _ {2} ) | $$

be the modulus of continuity of $ f $ (cf. Continuity, modulus of). It was shown by D. Jackson [1] that

$$ \tag{*} E _ {n} ( f ) \leq C \omega \left ( f ; \frac{1}{n} \right ) $$

(where $ C $ is an absolute constant), while if $ f $ has an $ r $-th order continuous derivative $ f ^ { ( r) } $, $ r \geq 1 $, then

$$ E _ {n} ( f ) \leq \frac{C _ {r} }{n ^ {r} } \omega \left ( f ^ { ( r) } ; \frac{1}{n} \right ) , $$

where the constant $ C _ {r} $ depends on $ r $ only. S.N. Bernshtein [3] obtained inequality (*) in an independent manner for the case

$$ \omega ( f ; t ) \leq K t ^ \alpha ,\ \ 0 < \alpha < 1 . $$
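
A quick numerical illustration of (*) is possible (this sketch is not part of the original article): $ \omega ( f ; \delta ) $ can be estimated on a grid, and $ E _ {n} ( f ) $ can be approximated by solving the discretized min-max problem as a linear program. The function names, the use of scipy's linprog, and the grid sizes are illustrative assumptions; both quantities are grid approximations (in particular, the computed $ E _ {n} $ slightly underestimates the true sup-norm error).

```python
import numpy as np
from scipy.optimize import linprog

def modulus_of_continuity(f, delta, m=4000):
    """Grid estimate of omega(f; delta) = max_{|t1-t2|<=delta} |f(t1)-f(t2)|
    for a 2*pi-periodic continuous f, sampled on [0, 2*pi]."""
    t = np.linspace(0.0, 2.0 * np.pi, m)
    y = f(t)
    h = t[1] - t[0]
    k = max(1, int(delta / h))   # grid shifts s with s*h <= delta (at least one)
    return max(np.max(np.abs(y[s:] - y[:-s])) for s in range(1, k + 1))

def best_trig_approx_error(f, n, m=400):
    """Approximate E_n(f): minimize max_j |f(x_j) - T_n(x_j)| over trigonometric
    polynomials T_n of degree n, posed as a linear program on m grid points."""
    x = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
    y = f(x)
    # Basis 1, cos(kx), sin(kx) for k = 1..n -> columns of Phi.
    Phi = np.column_stack([np.ones(m)]
                          + [np.cos(k * x) for k in range(1, n + 1)]
                          + [np.sin(k * x) for k in range(1, n + 1)])
    p = Phi.shape[1]
    obj = np.zeros(p + 1)
    obj[-1] = 1.0                # unknowns z = (coefficients c, error level t); minimize t
    ones = np.ones((m, 1))
    A_ub = np.vstack([np.hstack([Phi, -ones]),    # T(x_j) - f(x_j) <= t
                      np.hstack([-Phi, -ones])])  # f(x_j) - T(x_j) <= t
    b_ub = np.concatenate([y, -y])
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (p + 1))
    return res.x[-1]

if __name__ == "__main__":
    f = lambda x: np.abs(np.sin(x))   # 2*pi-periodic; omega(f; d) <= d (Lipschitz 1)
    for n in (4, 8, 16, 32):
        En = best_trig_approx_error(f, n)
        w = modulus_of_continuity(f, 1.0 / n)
        print(f"n={n:3d}  E_n(f)~{En:.4f}  omega(f;1/n)~{w:.4f}  ratio={En / w:.3f}")
```

The printed ratio $ E _ {n} ( f ) / \omega ( f ; 1/n ) $ staying bounded as $ n $ grows is exactly what (*) asserts.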

If $ f $ is continuous or $ r $ times continuously differentiable on a closed interval $ [ a , b ] $, $ r = 1, 2, \dots $, and if $ E _ {n} ( f ; a , b ) $ is the best uniform approximation error of the function $ f $ on $ [ a , b ] $ by algebraic polynomials of degree $ n $, then, for $ n > r $, one has the relation (with the convention $ f ^ { ( 0) } = f $)

$$ E _ {n} ( f ; a , b ) \leq \frac{A _ {r} ( b - a ) ^ {r} }{n ^ {r} } \omega \left ( f ^ { ( r) } ; \frac{b - a }{n} \right ) , $$

where the constant $ A _ {r} $ depends on $ r $ only.
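
The same discretized min-max device gives a rough estimate of $ E _ {n} ( f ; a , b ) $ for algebraic polynomials. In the sketch below (again an illustrative addition, not the article's construction), a Chebyshev basis mapped to $ [ a , b ] $ is used purely for numerical stability; the function name and grid size are assumptions.

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb
from scipy.optimize import linprog

def best_poly_approx_error(f, n, a, b, m=400):
    """Approximate E_n(f; a, b): the best uniform approximation error of f on
    [a, b] by algebraic polynomials of degree n, via a discretized min-max LP."""
    x = np.linspace(a, b, m)
    u = 2.0 * (x - a) / (b - a) - 1.0     # map [a, b] onto [-1, 1]
    Phi = cheb.chebvander(u, n)           # columns T_0(u), ..., T_n(u)
    y = f(x)
    p = Phi.shape[1]
    obj = np.zeros(p + 1)
    obj[-1] = 1.0                         # minimize the error level t
    ones = np.ones((m, 1))
    A_ub = np.vstack([np.hstack([Phi, -ones]), np.hstack([-Phi, -ones])])
    b_ub = np.concatenate([y, -y])
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (p + 1))
    return res.x[-1]

if __name__ == "__main__":
    # f(x) = |x| on [-1, 1] is Lipschitz with omega(f; d) = d, so the continuous
    # case (r = 0) of the bound above predicts E_n(f; -1, 1) = O(1/n).
    for n in (4, 8, 16, 32):
        print(n, round(best_poly_approx_error(np.abs, n, -1.0, 1.0), 5))
```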

The Jackson inequalities are also known as the Jackson theorems or as direct theorems in the theory of approximation of functions. They can be generalized in various directions: to approximation in an integral metric, to approximation by entire functions of finite order, to estimates of the approximation in terms of a modulus of smoothness of order $ k $, or to functions of several variables. The exact values of the constants in Jackson's inequalities have been determined in several cases.

References

[1] D. Jackson, "Ueber die Genauigkeit der Annäherung stetiger Funktionen durch ganze rationale Funktionen gegebenen Grades und trigonometrische Summen gegebener Ordnung" , Göttingen (1911) (Thesis)
[2] S.M. Nikol'skii, "Approximation of functions of several variables and imbedding theorems" , Springer (1975) (Translated from Russian)
[3] S.N. Bernshtein, "On the best approximation of continuous functions by polynomials of a given degree (1912)" , Collected works , 1 , Moscow (1952) pp. 11–104
[4] N.P. Korneichuk, "Extremal problems in approximation theory" , Moscow (1976) (In Russian)
[5] G.G. Lorentz, "Approximation of functions" , Holt, Rinehart & Winston (1966)

Comments

See also Approximation of functions, direct and inverse theorems.

Let $ \omega _ {k} ( f; \delta ) $ be the modulus of continuity of order $ k $,

$$ \omega _ {k} ( f; \delta ) = \sup _ { \begin{array}{c} | h | \leq \delta \\ x, x + kh \in [ a, b] \end{array} } \left | \sum _ {\nu = 0 } ^ { k } (- 1) ^ {k - \nu } \binom{k}{\nu} f ( x + \nu h) \right | . $$
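
A grid-based sketch of this $ k $-th order modulus (an illustrative addition; the function name and sampling parameters are assumptions, and only $ h > 0 $ is scanned, which suffices since the $ k $-th difference for $ -h $ has the same absolute value after the substitution $ x \mapsto x + kh $):

```python
import numpy as np
from math import comb

def modulus_of_smoothness(f, k, delta, a, b, m=2000, steps=100):
    """Grid estimate of omega_k(f; delta): the sup over |h| <= delta and x with
    x, x + k*h in [a, b] of |sum_{nu=0}^{k} (-1)^(k-nu) C(k,nu) f(x + nu*h)|."""
    x = np.linspace(a, b, m)
    coeffs = [(-1) ** (k - nu) * comb(k, nu) for nu in range(k + 1)]
    best = 0.0
    for h in np.linspace(delta / steps, delta, steps):
        xs = x[x + k * h <= b]            # keep x with x + k*h still inside [a, b]
        if xs.size == 0:
            continue
        diff = sum(c * f(xs + nu * h) for nu, c in enumerate(coeffs))
        best = max(best, float(np.max(np.abs(diff))))
    return best

# For k = 1 this reduces to the ordinary modulus of continuity omega(f; delta).
```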

Then, more generally,

$$ E _ {n} ( f ) \leq C _ {k} \omega _ {k} ( f ; n ^ {-1} ) , $$

where $ C _ {k} $ is independent of $ f $. The best possible coefficients $ C _ {k} $ were determined by J. Favard. For the interval $ [- 1, 1] $ the constant $ C _ {1} $ is $ 6 $. A result of S.B. Stechkin says that

$$ \omega _ {k} \left ( f; { \frac{1}{n} } \right ) \leq \ \frac{C _ {k} }{n ^ {k} } \sum _ {i = 0 } ^ { n } ( i + 1) ^ {k - 1 } E _ {i} ( f ) . $$
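
For example (a standard consequence, not spelled out in the article): if $ E _ {i} ( f ) \leq K ( i + 1 ) ^ {- \alpha } $ for some $ 0 < \alpha < k $, then Stechkin's inequality gives

$$ \omega _ {k} \left ( f; \frac{1}{n} \right ) \leq \frac{C _ {k} K }{n ^ {k} } \sum _ {i = 0 } ^ { n } ( i + 1 ) ^ {k - 1 - \alpha } \leq \frac{C _ {k} ^ \prime K }{n ^ \alpha } , $$

since the sum is of order $ n ^ {k - \alpha } $. For $ k = 1 $ this is a converse to the Bernshtein case $ \omega ( f ; t ) \leq K t ^ \alpha $, $ 0 < \alpha < 1 $, of inequality (*) above; estimates of this kind are therefore called inverse theorems (cf. Approximation of functions, direct and inverse theorems).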

References

[a1] E.W. Cheney, "Introduction to approximation theory" , McGraw-Hill (1966) pp. Chapt. 4
[a2] G.W. Meinardus, "Approximation von Funktionen und ihre numerische Behandlung" , Springer (1964) pp. Chapt. 1, §5
[a3] T.J. Rivlin, "An introduction to the approximation of functions" , Dover, reprint (1981)
This article was adapted from an original article by N.P. Korneichuk and V.P. Motornyi (originator), which appeared in Encyclopedia of Mathematics, ISBN 1402006098.