Kolmogorov inequality
Kolmogorov's inequality in approximation theory is a multiplicative inequality between the norms in the spaces $L_s(J)$ of functions and their derivatives on the real axis (or half-line):

$$ \| x^{(k)} \|_{L_q} \leq C \, \| x \|_{L_r}^{\nu} \, \| x^{(n)} \|_{L_p}^{1-\nu}, $$

where

$$ 0 \leq k < n, \qquad \nu = \frac{n - k - p^{-1} + q^{-1}}{n - p^{-1} + r^{-1}}, $$

and $C$ does not depend on $x$. Such inequalities were first studied by G.H. Hardy (1912), J.E. Littlewood (1912), E. Landau (1913), and J. Hadamard (1914). A.N. Kolmogorov [1] obtained the best possible constant $C$ for the most important case $J = (-\infty, +\infty)$, $p = q = r = \infty$ and arbitrary $k$, $n$.
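
For $p = q = r = \infty$ the exponent reduces to $\nu = 1 - k/n$. As an illustration (a classical special case, added here for concreteness rather than taken from the original text): for $k = 1$, $n = 2$ and $J = (-\infty, +\infty)$ the sharp inequality is

$$ \| x' \|_{L_\infty} \leq \sqrt{2} \, \| x \|_{L_\infty}^{1/2} \, \| x'' \|_{L_\infty}^{1/2}, $$

while on the half-line the best constant is $2$ (Landau's original case).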

Kolmogorov's inequality is connected with problems of best numerical differentiation and the stable calculation of the (unbounded) operator $D^k$ of $k$-fold differentiation. In fact, the modulus of continuity

$$ \omega(\delta) = \sup \left\{ \| x^{(k)} \|_{L_q} : \| x \|_{L_r} \leq \delta, \ \| x^{(n)} \|_{L_p} \leq 1 \right\} $$

of the operator $D^k$ on the class $\{ x \in L_r : \| x^{(n)} \|_{L_p} \leq 1 \}$ is expressed by the formula $\omega(\delta) = \omega(1) \delta^{\nu}$, that is, Kolmogorov's inequality holds with constant $C = \omega(1)$.
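
The power law for $\omega$ is a homogeneity argument; a minimal sketch (not part of the original text): for $c, \lambda > 0$ put $y(t) = c \, x(\lambda t)$, so that

$$ \| y \|_{L_r} = c \lambda^{-1/r} \| x \|_{L_r}, \qquad \| y^{(n)} \|_{L_p} = c \lambda^{n - 1/p} \| x^{(n)} \|_{L_p}, \qquad \| y^{(k)} \|_{L_q} = c \lambda^{k - 1/q} \| x^{(k)} \|_{L_q}. $$

Choosing $c$ and $\lambda$ so that $c \lambda^{-1/r} \delta = 1$ and $c \lambda^{n - 1/p} = 1$ maps the class $\{ \| x \|_{L_r} \leq \delta , \ \| x^{(n)} \|_{L_p} \leq 1 \}$ onto the class with $\delta = 1$; tracking the remaining factor $c \lambda^{k - 1/q}$ gives $\omega(\delta) = \omega(1) \delta^{\nu}$ with precisely the exponent $\nu$ above.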

Kolmogorov's inequality is a special case of inequalities relating to the imbedding of classes of differentiable functions (see Imbedding theorems).

References

[1] A.N. Kolmogorov, "On inequalities between the upper bounds of the successive derivatives of an arbitrary function on an infinite interval", Transl. Amer. Math. Soc. (1), 2 (1962) pp. 233–243; Uchen. Zap. Moskov. Univ. Mat., 3 : 30 (1939) pp. 3–16
[2] S.B. Stechkin, "Best approximation of linear operators", Math. Notes, 1 (1967) pp. 91–99; Mat. Zametki, 1 : 2 (1967) pp. 137–148
[3] L.V. Taikov, "Kolmogorov-type inequalities and the best formulas for numerical differentiation", Math. Notes, 4 (1968) pp. 631–634; Mat. Zametki, 4 : 2 (1968) pp. 233–238
[4] V.V. Arestov, "Precise inequalities between norms of functions and their derivatives", Acta Sci. Math., 33 (1972) pp. 243–267 (In Russian)

Comments

Landau's work [a1] should be mentioned in particular: in 1913 he initiated a new kind of extremum problem, namely obtaining sharp inequalities between the supremum norms of derivatives. The analogue of Kolmogorov's theorem for the half-line $\mathbf{R}_{+}$ was established by I.J. Schoenberg and A. Cavaretta [a3]; cf. also [a2]. For a beautiful survey concerning these problems see [a4]. Norm inequalities for the difference operator have been studied rather recently; see e.g. [a5].

References

[a1] E. Landau, "Einige Ungleichungen für zweimal differentierbare Funktionen", Proc. London Math. Soc., 13 (1913) pp. 43–49
[a2] H.G. ter Morsche, "Interpolation and extremal properties of -spline functions", Univ. Eindhoven (1982) (Thesis)
[a3] I.J. Schoenberg, A. Cavaretta, "Solution of Landau's problem concerning higher derivatives on the halfline", Proc. Internat. Conf. Constructive Function Theory, Bulgarian Acad. Sci. (1972) pp. 297–308
[a4] I.J. Schoenberg, "The elementary cases of Landau's problem of inequalities between derivatives", Amer. Math. Monthly, 80 (1973) pp. 121–158
[a5] H.G. Kaper, B.E. Spellman, "Best constants in norm inequalities for the difference operator", Trans. Amer. Math. Soc., 299 (1987) pp. 351–372

Kolmogorov's inequality in probability theory is an inequality for the maximum of sums of independent random variables. It is a generalization of the classical Chebyshev inequality in probability theory. Let $X_1, \dots, X_n$ be independent random variables with finite mathematical expectations $a_k = \mathsf{E} X_k$ and variances $\sigma_k^2 = \mathsf{D} X_k$. Then, for any $\epsilon > 0$,

$$ \mathsf{P} \left\{ \max_{1 \leq m \leq n} \left| \sum_{k=1}^{m} (X_k - a_k) \right| \geq \epsilon \right\} \leq \frac{1}{\epsilon^2} \sum_{k=1}^{n} \sigma_k^2, $$

and if the variables are bounded ($|X_i| \leq c$ with probability $1$), then

$$ 1 - \frac{(\epsilon + 2c)^2}{\sum_{k=1}^{n} \sigma_k^2} \leq \mathsf{P} \left\{ \max_{1 \leq m \leq n} \left| \sum_{k=1}^{m} (X_k - a_k) \right| \geq \epsilon \right\}. $$

These inequalities were established by A.N. Kolmogorov [1a], [1b]. Kolmogorov's inequality is remarkable both as to its proof (the method of argument was new in probability theory) and as to the result itself (the estimate for the maximum of the sums is the same as that for the last sum in the Chebyshev inequality).

In the proof of the Kolmogorov inequality, essential use is made of the properties of conditional mathematical expectations of functions of the sums $S_{k+p}$ under the condition that $X_1, \dots, X_k$ are fixed. These properties arise from the independence of the $X_k$.
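
To indicate the idea, here is a minimal sketch of the key step (a standard argument, not verbatim from the article): assume $a_k = 0$, write $S_m = X_1 + \dots + X_m$, and let $A_m$ be the event that $m$ is the first index with $|S_m| \geq \epsilon$. The events $A_m$ are disjoint, $S_m \mathbf{1}_{A_m}$ is independent of $S_n - S_m$, and $\mathsf{E}(S_n - S_m) = 0$, so the cross term vanishes in

$$ \sum_{k=1}^{n} \sigma_k^2 = \mathsf{E} S_n^2 \geq \sum_{m=1}^{n} \mathsf{E} \left[ S_n^2 \mathbf{1}_{A_m} \right] = \sum_{m=1}^{n} \mathsf{E} \left[ \left( S_m^2 + (S_n - S_m)^2 \right) \mathbf{1}_{A_m} \right] \geq \epsilon^2 \sum_{m=1}^{n} \mathsf{P}(A_m), $$

and $\sum_m \mathsf{P}(A_m)$ is exactly the probability that the maximum reaches $\epsilon$.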

The Kolmogorov inequality admits numerous generalizations, of which one can mention the following.

1) The Kolmogorov inequality remains true if the condition of independence of the variables $X_n$ is replaced by the condition that $\{ X_n \}$ is an absolutely-unbiased sequence, that is, if the sequence of sums $S_n = \sum_{k=1}^{n} X_k$ forms a martingale.

2) If $g(t) \geq 0$ is a convex monotone function and $\mathsf{E}\, g(|S_n|) < \infty$, then

$$ \mathsf{P} \left\{ \max_{1 \leq m \leq n} |S_m| \geq \epsilon \right\} \leq \frac{\mathsf{E}\, g(|S_n|)}{g(\epsilon)}. $$
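
For example (an immediate specialization, added for concreteness): taking $g(t) = t^2$ for centered summands recovers the basic inequality above, since then

$$ \frac{\mathsf{E}\, g(|S_n|)}{g(\epsilon)} = \frac{\mathsf{E} S_n^2}{\epsilon^2} = \frac{1}{\epsilon^2} \sum_{k=1}^{n} \sigma_k^2. $$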

3) If the $X_k$, $k = 1, \dots, n$, are symmetric about the origin, then

$$ \mathsf{P} \left\{ \max_{1 \leq m \leq n} |S_m| \geq \epsilon \right\} \leq 2\, \mathsf{P} \{ |S_n| \geq \epsilon \}; $$

see Lévy inequality.

4) For arbitrary independent random variables $X_1, X_2, \dots$,

$$ \mathsf{P} \left\{ \max_{1 \leq m \leq n} |S_m| > \delta + \epsilon \right\} \leq \frac{1}{1 - p}\, \mathsf{P} \{ |S_n| > \epsilon \}, $$

provided that $\delta > 0$ and $0 < p < 1$ are chosen so that $\mathsf{P} \{ |S_m| > \delta \} \leq p$ for all $m$, $1 \leq m \leq n$.
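
As a usage note (an added remark relying only on Chebyshev's inequality): if the $X_k$ are centered with $B_n = \sum_{k=1}^{n} \sigma_k^2 < \infty$, then $\mathsf{P} \{ |S_m| > \delta \} \leq B_n / \delta^2$ for every $m \leq n$, so the choice $\delta = \sqrt{2 B_n}$, $p = 1/2$ gives

$$ \mathsf{P} \left\{ \max_{1 \leq m \leq n} |S_m| > \sqrt{2 B_n} + \epsilon \right\} \leq 2\, \mathsf{P} \{ |S_n| > \epsilon \}. $$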

The following property of sums of independent variables can be seen in all versions of Kolmogorov's inequality: the "spread" of the maximum sum has the same order of magnitude as the "spread" of the final sum.

Just as the Chebyshev inequality is applied in the derivation of the law of large numbers, so the Kolmogorov inequality is applied in the proof of the strong law of large numbers (the Kolmogorov criterion for $S_n / n \rightarrow 0$ almost everywhere). The proof of convergence theorems for series of random variables is based on the use of the Kolmogorov inequality.
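
Concretely (a standard formulation of the criterion, added for reference): if the independent variables $X_n$ satisfy

$$ \sum_{n=1}^{\infty} \frac{\sigma_n^2}{n^2} < \infty, $$

then $(S_n - \mathsf{E} S_n) / n \rightarrow 0$ almost surely.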

References

[1a] A.N. Kolmogorov, "Ueber die Summen durch den Zufall bestimmter unabhängiger Grössen", Math. Ann., 99 (1928) pp. 309–319
[1b] A.N. Kolmogorov, "Bemerkungen zur meiner Arbeit "Ueber die Summen durch den Zufall bestimmter unabhängiger Grössen"", Math. Ann., 102 (1929) pp. 484–488
[2] A.N. Kolmogorov, "Foundations of the theory of probability", Chelsea, reprint (1950) (Translated from German)
[3] M. Loève, "Probability theory", Princeton Univ. Press (1963)

A.V. Prokhorov

Comments

References

[a1] W. Feller, "An introduction to probability theory and its applications", 2, Wiley (1966) pp. Sect. VII.8
How to Cite This Entry:
Kolmogorov inequality. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Kolmogorov_inequality&oldid=25524
This article was adapted from an original article by Yu.N. Subbotin (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.