Limit theorems

''in probability theory''

2020 Mathematics Subject Classification: Primary: 60Fxx

A general name for a number of theorems in probability theory that give conditions for the appearance of some regularity as the result of the action of a large number of random sources. The first limit theorems, established by J. Bernoulli (1713) and P. Laplace (1812), are related to the distribution of the deviation of the frequency $\mu_n/n$ of appearance of some event $E$ in $n$ independent trials from its probability $p$, $0 < p < 1$ (exact statements can be found in the articles Bernoulli theorem; Laplace theorem). S. Poisson (1837) generalized these theorems to the case when the probability $p_k$ of appearance of $E$ in the $k$-th trial depends on $k$, by writing down the limiting behaviour, as $n \rightarrow \infty$, of the distribution of the deviation of $\mu_n/n$ from the arithmetic mean $\overline{p} = \left( \sum_{k=1}^{n} p_k \right)/n$ of the probabilities $p_k$, $1 \leq k \leq n$ (cf. Poisson theorem). If $X_k$ denotes the random variable that takes the value 1 if $E$ appears in the $k$-th trial and the value 0 when the opposite event appears, then $\mu_n$ can be expressed as the sum

$$ \mu_n = X_1 + \dots + X_n, $$

which makes it possible to regard the theorems mentioned above as particular cases of two more general statements related to sums of independent random variables — the law of large numbers and the central limit theorem (these are given in their classical forms below).
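
For illustration, this representation can be checked by direct simulation; in the following minimal sketch (the value $p = 0.3$ is an arbitrary choice) the frequency $\mu_n/n$ of the simulated event settles near $p$ as $n$ grows.

<syntaxhighlight lang="python">
import random

random.seed(0)

# mu_n = X_1 + ... + X_n, where X_k = 1 if the event E occurs in the
# k-th trial and X_k = 0 otherwise; mu_n / n is the observed frequency.
p = 0.3  # illustrative probability of E in a single trial
for n in (10, 100, 10_000, 1_000_000):
    mu_n = sum(1 for _ in range(n) if random.random() < p)
    print(f"n = {n:>9}: mu_n/n = {mu_n / n:.4f}   (p = {p})")
</syntaxhighlight>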

==The law of large numbers.==

Let

$$ \tag{1} X_1, X_2, \dots $$

be a sequence of independent random variables, let $s_n$ be the sum of the first $n$ elements of this sequence,

$$ \tag{2} s_n = X_1 + \dots + X_n, $$

let $A_n$ and $B_n^2$ be, respectively, the mathematical expectation,

$$ A_n = \mathsf{E} s_n = \mathsf{E} X_1 + \dots + \mathsf{E} X_n, $$

and variance (cf. Dispersion),

$$ B_n^2 = \mathsf{D} s_n = \mathsf{D} X_1 + \dots + \mathsf{D} X_n, $$

of the sum $s_n$. One says that the sequence (1) is subject to the law of large numbers if, for any $\epsilon > 0$, the probability of the inequality

$$ \left| \frac{s_n}{n} - \frac{A_n}{n} \right| > \epsilon $$

tends to zero as $n \rightarrow \infty$.

Very general conditions for the law of large numbers to be applicable were found first by P.L. Chebyshev (1867) and were later generalized by A.A. Markov (1906). The problem of necessary and sufficient conditions for the law of large numbers to be applicable was exhaustively treated by A.N. Kolmogorov (1928). If all random variables have the same distribution function, then these conditions reduce to one: the $X_n$ must have finite mathematical expectation (this was shown by A.Ya. Khinchin in 1929).
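
The role of the moment condition can be seen numerically: in the following minimal sketch (the two distributions are chosen arbitrarily for illustration), running means of exponential summands, which have a finite expectation, stabilize near it, while running means of standard Cauchy summands, which have no expectation, do not stabilize at all.

<syntaxhighlight lang="python">
import math
import random

random.seed(1)

def running_means(draw, n):
    """Return the running means s_k / k, k = 1, ..., n, of one realization."""
    s, means = 0.0, []
    for k in range(1, n + 1):
        s += draw()
        means.append(s / k)
    return means

n = 100_000
expo = running_means(lambda: random.expovariate(1.0), n)       # E X = 1
cauchy = running_means(
    lambda: math.tan(math.pi * (random.random() - 0.5)), n     # no expectation
)
for k in (100, 1_000, 10_000, 100_000):
    print(f"k = {k:>6}: exponential {expo[k - 1]:8.4f}   Cauchy {cauchy[k - 1]:10.4f}")
</syntaxhighlight>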

==The central limit theorem.==

One says that the central limit theorem holds for a sequence (1) if for arbitrary $z_1$ and $z_2$ the probability of the inequality

$$ z_1 B_n < s_n - A_n < z_2 B_n $$

has as limit, as $n \rightarrow \infty$, the quantity

$$ \Phi(z_2) - \Phi(z_1), $$

where

$$ \Phi(z) = \frac{1}{\sqrt{2\pi}} \int\limits_{-\infty}^{z} e^{-x^2/2} \, dx $$

(cf. Normal distribution). Rather general sufficient conditions for the central limit theorem to hold were indicated by Chebyshev (1887); however, his proofs contained gaps, which were filled in somewhat later by Markov (1898). A solution of the problem which is nearly final was obtained by A.M. Lyapunov (1901). The exact formulation of Lyapunov's theorem is: Suppose

$$ c_k = \mathsf{E} | X_k - \mathsf{E} X_k |^{2 + \delta}, \qquad \delta > 0, $$

$$ C_n = c_1 + \dots + c_n. $$

If the ratio $L_n = C_n / B_n^{2+\delta}$ tends to zero as $n \rightarrow \infty$, then the central limit theorem holds for (1). The final solution to the problem of conditions of applicability of the central limit theorem was obtained, in general outline, by S.N. Bernstein [S.N. Bernshtein] (1926) and was completed by W. Feller (1935). Under the conditions of the central limit theorem the relative accuracy of approximation of the probability of an inequality of the form $s_n - A_n > z_n B_n$, where $z_n$ grows unboundedly as $n$ tends to infinity, by $1 - \Phi(z_n)$ can be very low. Correction factors necessary in order to increase the accuracy are indicated in limit theorems for probabilities of large deviations (cf. Probability of large deviations; Cramér theorem). This question was studied, following H. Cramér and Feller, by Yu.V. Linnik and others. Typical results related to this branch of the subject are most conveniently explained using the example of the sums (2) of independent identically-distributed random variables $X_1, X_2, \dots$ with $\mathsf{E} X_j = 0$ and $\mathsf{D} X_j = 1$. In this case $A_n = 0$, $B_n = \sqrt{n}$.
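
For illustration, Lyapunov's condition is easy to verify for i.i.d. summands uniform on $(0, 1)$ with $\delta = 1$: here $c_k = \mathsf{E} | X_k - 1/2 |^3 = 1/32$ and $\mathsf{D} X_k = 1/12$, so $L_n = (n/32)/(n/12)^{3/2} \rightarrow 0$. A minimal sketch (sample sizes chosen arbitrarily) checks this and the resulting normal approximation.

<syntaxhighlight lang="python">
import math
import random

random.seed(2)

# Lyapunov ratio for i.i.d. U(0,1) summands with delta = 1:
# C_n = n/32 and B_n^3 = (n/12)**1.5, so L_n is about 1.3/sqrt(n) -> 0.
for n in (10, 1_000, 100_000):
    print(f"n = {n:>7}: L_n = {(n / 32) / (n / 12) ** 1.5:.5f}")

# Empirical check of the conclusion: (s_n - A_n)/B_n is nearly Phi-distributed.
n, trials = 500, 5_000
A_n, B_n = n * 0.5, math.sqrt(n / 12)
hits = sum(
    (sum(random.random() for _ in range(n)) - A_n) / B_n < 1.0
    for _ in range(trials)
)
Phi_1 = 0.5 * (1 + math.erf(1 / math.sqrt(2)))
print(f"empirical P((s_n - A_n)/B_n < 1) = {hits / trials:.4f}   Phi(1) = {Phi_1:.4f}")
</syntaxhighlight>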

Consider, e.g., the probability of the inequality

$$ s_n \geq z_n \sqrt{n}, $$

which equals $1 - F_n(z_n)$, where $F_n$ is the distribution function of the variable $s_n / \sqrt{n}$, and for fixed $z_n = z$ as $n \rightarrow \infty$,

$$ \tag{3} 1 - F_n(z) \rightarrow 1 - \Phi(z). $$

If $z_n$ depends on $n$ and is moreover such that $z_n \rightarrow \infty$ as $n \rightarrow \infty$, then

$$ 1 - F_n(z_n) \rightarrow 0 \quad \textrm{ and } \quad 1 - \Phi(z_n) \rightarrow 0 $$

and formula (3) is useless. It is necessary to obtain bounds on the relative accuracy of approximation, i.e. for the ratio of $1 - F_n(z_n)$ to $1 - \Phi(z_n)$. In particular, there naturally arises the question of conditions under which

$$ \tag{4} \frac{1 - F_n(z_n)}{1 - \Phi(z_n)} \rightarrow 1 $$

as $z_n \rightarrow \infty$.

Relation (4) holds for $z_n$ of arbitrary growth only if the summands have a normal distribution (this conclusion is valid as soon as $z_n$ is of order greater than $\sqrt{n}$). If the summands are not normal, then relation (4) can hold in certain zones, whose orders do not exceed $\sqrt{n}$. "Smaller" zones (of logarithmic order) are obtained under the condition that a certain number of moments is finite. Moreover, under definite "regularity" conditions on the densities of the summands the "normality" asymptotics imply power asymptotics. E.g., if the density of the summands is

$$ \frac{2}{\pi} \, \frac{1}{(1 + z^2)^2}, $$

then, uniformly in $z$, as $n \rightarrow \infty$,

$$ \mathsf{P} \left\{ \frac{s_n}{\sqrt{n}} \geq z \right\} \sim 1 - \Phi(z) + \frac{2}{3\pi} \, \frac{1}{\sqrt{n}} \, \frac{1}{z^3}. $$
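
This expansion can be probed by simulation, since the density above is exactly that of $T/\sqrt{3}$ for $T$ a Student $t$-variable with three degrees of freedom. A minimal sketch (sample sizes chosen arbitrarily; it assumes NumPy and SciPy are available):

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Summands with density (2/pi)/(1 + x^2)^2, i.e. (Student t, 3 d.f.)/sqrt(3);
# they have mean 0 and variance 1.
n, z = 100, 3.0
trials, chunk, hits = 1_000_000, 100_000, 0
for _ in range(trials // chunk):
    x = rng.standard_t(df=3, size=(chunk, n)) / np.sqrt(3)
    hits += int(np.count_nonzero(x.sum(axis=1) / np.sqrt(n) >= z))

print(f"empirical P(s_n/sqrt(n) >= z) = {hits / trials:.5f}")
print(f"1 - Phi(z)                    = {norm.sf(z):.5f}")
print(f"with the 1/sqrt(n) correction = {norm.sf(z) + 2 / (3 * np.pi) / np.sqrt(n) / z**3:.5f}")
</syntaxhighlight>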

Note that, as $z \rightarrow \infty$,

$$ 1 - \Phi(z) \sim \frac{1}{\sqrt{2\pi}\, z} \, e^{-z^2/2}. $$
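
This classical tail estimate is easily checked numerically through the identity $1 - \Phi(z) = \frac{1}{2} \mathrm{erfc}(z/\sqrt{2})$; a minimal sketch:

<syntaxhighlight lang="python">
import math

# The ratio of 1 - Phi(z) to its asymptotic approximation tends to 1.
for z in (1.0, 2.0, 4.0, 8.0):
    tail = 0.5 * math.erfc(z / math.sqrt(2))                      # exact 1 - Phi(z)
    approx = math.exp(-z * z / 2) / (math.sqrt(2 * math.pi) * z)  # asymptotic form
    print(f"z = {z}: ratio = {tail / approx:.4f}")
</syntaxhighlight>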

Taking this asymptotic into account, it is easy to convince oneself that (4) holds. The extension to zones of power order (of the form $n^\alpha$, $\alpha < 1/2$) requires that the condition

$$ \tag{5} \mathsf{E}\, e^{| X_j |^{4\alpha/(2\alpha + 1)}} < \infty $$

should hold, as well as that a certain number (depending on $\alpha$) of moments of $X_j$ coincide with corresponding moments of the normal distribution. If the latter condition (on the coincidence of moments) is not satisfied, then the ratio on the left-hand side of (4) can be described by a Cramér series (under the so-called Cramér condition, cf. Cramér theorem) or initial partial sums of it (under a condition of the type (5)).

Estimates of probabilities of large deviations are used in mathematical statistics, statistical physics, etc.

The following may be distinguished among the other directions of research in the domain of limit theorems.

1) Research, initiated by Markov and continued by Bernstein and others, on conditions under which the law of large numbers and the central limit theorem hold for sums of dependent random variables.

2) Even in the case of sequences of identically-distributed random variables one can exhibit simple examples when "normalized" (i.e. subjected to a certain linear transformation) sums $(s_n - a_n)/b_n$, with $a_n$ and $b_n > 0$ constants, have a limit distribution different from a normal one (non-degenerate distributions, i.e. distributions not concentrated at a single point, are meant) (cf. Stable distribution). In the work of Khinchin, B.V. Gnedenko, P. Lévy, W. Doeblin, and others, both the class of possible limit distributions for sums of independent random variables and the conditions for convergence of the distributions of sums to some limit distribution (in triangular arrays (cf. Triangular array) of random variables one imposes here the condition of asymptotic negligibility of the summands) have been completely studied. (Cf. Infinitely-divisible distribution; Stochastic process with independent increments.)
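
The simplest such example is furnished by the standard Cauchy distribution, which is stable: for i.i.d. standard Cauchy summands the normalized sum $s_n/n$ (i.e. $a_n = 0$, $b_n = n$) is again standard Cauchy, so the limit distribution of the normalized sums is non-normal. A minimal simulation sketch (sample sizes chosen arbitrarily):

<syntaxhighlight lang="python">
import math
import random

random.seed(4)

def cauchy():
    """One standard Cauchy variable via inversion of its distribution function."""
    return math.tan(math.pi * (random.random() - 0.5))

n, trials = 200, 20_000
normalized = sorted(sum(cauchy() for _ in range(n)) / n for _ in range(trials))

# Empirical quantiles of s_n / n against standard Cauchy quantiles.
for q in (0.25, 0.5, 0.75, 0.9):
    emp = normalized[int(q * trials)]
    exact = math.tan(math.pi * (q - 0.5))
    print(f"q = {q}: empirical {emp:7.3f}   Cauchy quantile {exact:7.3f}")
</syntaxhighlight>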

3) Local limit theorems have been given considerable attention. E.g., suppose that the random variables $X_n$ take only integer values. Then the sums $s_n$ also take only integer values, and it is natural to pose the question of the limiting behaviour of the probabilities $P_n(m)$ that $s_n = m$, where $m$ is an integer. The simplest example of a local limit theorem is the local Laplace theorem. Another type of local limit theorem describes the limiting behaviour of the densities of the distributions of the sums.
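
For Bernoulli summands the local Laplace theorem gives $P_n(m) \approx \varphi\left( (m - np)/\sqrt{npq} \right)/\sqrt{npq}$, where $q = 1 - p$ and $\varphi$ is the standard normal density; a minimal numerical check (with $n$ and $p$ chosen arbitrarily):

<syntaxhighlight lang="python">
import math

n, p = 1_000, 0.3
mean, sd = n * p, math.sqrt(n * p * (1 - p))
for m in (280, 300, 320):
    exact = math.comb(n, m) * p**m * (1 - p) ** (n - m)   # P(s_n = m)
    x = (m - mean) / sd
    approx = math.exp(-x * x / 2) / (math.sqrt(2 * math.pi) * sd)
    print(f"m = {m}: exact {exact:.6f}   local normal approximation {approx:.6f}")
</syntaxhighlight>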

4) Limit theorems in their classical formulation describe the behaviour of individual sums $s_n$ as $n$ grows. Sufficiently general limit theorems for probabilities of events depending on several sums at once were first obtained by Kolmogorov (1931). His results imply, e.g., that under fairly general conditions the probability of the inequality

$$ \max_{1 \leq k \leq n} | s_k | < z B_n $$

has as limit the quantity

$$ \frac{4}{\pi} \sum_{k = 0}^{\infty} \frac{(-1)^k}{2k + 1} \, e^{-(2k + 1)^2 \pi^2 / 8 z^2}, \qquad z > 0. $$

A most general means for proving analogous limit theorems is by limit transition from discrete to continuous processes.
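
For illustration, Kolmogorov's limit formula can be compared with a direct simulation of a symmetric Bernoulli random walk, in the spirit of the limit transition just mentioned (a minimal sketch with arbitrarily chosen sample sizes, assuming NumPy is available):

<syntaxhighlight lang="python">
import math
import numpy as np

rng = np.random.default_rng(5)

def K(z, terms=50):
    """Kolmogorov's limit for P(max_{k <= n} |s_k| < z * B_n)."""
    return (4 / math.pi) * sum(
        (-1) ** k / (2 * k + 1)
        * math.exp(-((2 * k + 1) ** 2) * math.pi ** 2 / (8 * z * z))
        for k in range(terms)
    )

# Symmetric +-1 summands have mean 0 and variance 1, so B_n = sqrt(n).
n, trials, z = 1_000, 5_000, 2.0
steps = rng.choice((-1.0, 1.0), size=(trials, n))
max_abs = np.abs(np.cumsum(steps, axis=1)).max(axis=1)

print(f"empirical P(max|s_k| < z*sqrt(n)) = {np.mean(max_abs < z * math.sqrt(n)):.4f}")
print(f"Kolmogorov limit                  = {K(z):.4f}")
</syntaxhighlight>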

5) The limit theorems given above are related to sums of random variables. An example of a limit theorem of a different kind is given by limit theorems for order statistics. These theorems have been studied in detail by Gnedenko, N.V. Smirnov and others.

6) Finally, theorems establishing properties of sequences of random variables occurring with probability one are called strong limit theorems. (Cf. Strong law of large numbers; Law of the iterated logarithm.)

For methods of proof of limit theorems see Characteristic function; Distributions, convergence of.

==Comments==

The law of large numbers is usually called the weak law of large numbers. It is a particular example of a sequence of random variables converging in probability to 0 (cf. Convergence in probability). The central limit theorem is now an example of a very wide class of theorems about convergence in distribution of sequences of random variables or sequences of stochastic processes. Equivalently, these theorems deal with the weak convergence of the probability measures describing the distributions of the variables or processes under consideration (cf. Convergence of measures; Weak convergence of probability measures).

==References==

[Bi] P. Billingsley, "Convergence of probability measures", Wiley (1968) MR0233396 Zbl 0172.21201
[GnKo] B.V. Gnedenko, A.N. Kolmogorov, "Limit distributions for sums of independent random variables", Addison-Wesley (1954) (Translated from Russian) MR0062975 Zbl 0056.36001
[IbLi] I.A. Ibragimov, Yu.V. Linnik, "Independent and stationary sequences of random variables", Wolters-Noordhoff (1971) (Translated from Russian) MR0322926 Zbl 0219.60027
[JaSh] J. Jacod, A.N. Shiryaev, "Limit theorems for stochastic processes", Springer (1987) (Translated from Russian) MR0959133 Zbl 0635.60021
[Lo] M. Loève, "Probability theory", I, Springer (1977) MR0651017 MR0651018 Zbl 0359.60001
[PaRa] V. Paulauskas, A. Račkauskas, "Approximation theory in the central limit theorem. Exact results in Banach spaces", Kluwer (1989) (Translated from Russian) MR1015294 Zbl 0715.60023
[Pe] V.V. Petrov, "Sums of independent random variables", Springer (1975) (Translated from Russian) MR0388499 Zbl 0322.60043 Zbl 0322.60042
[Po] D. Pollard, "Convergence of stochastic processes", Springer (1984) MR0762984 Zbl 0544.60045
[PrRo] Yu.V. Prokhorov, Yu.A. Rozanov, "Probability theory, basic concepts. Limit theorems, random processes", Springer (1969) (Translated from Russian) Zbl 0186.49701
How to Cite This Entry:
Limit theorems. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Limit_theorems&oldid=21595
This article was adapted from an original article by Yu.V. Prokhorov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article