Poisson theorem




2020 Mathematics Subject Classification: Primary: 60F05

Poisson's theorem is a limit theorem in probability theory which is a particular case of the law of large numbers. Poisson's theorem generalizes the Bernoulli theorem to the case of independent trials in which the probability of appearance of a certain event depends on the trial number (the so-called Poisson scheme). Poisson's theorem states: if in a sequence of independent trials an event $A$ occurs with probability $p_k$ at the $k$-th trial ($k = 1, 2, \dots$), and if $\mu_n / n$ is the frequency of $A$ in the first $n$ trials, then for any $\epsilon > 0$ the probability of the inequality

$$ \left| \frac{\mu_n}{n} - \frac{p_1 + \dots + p_n}{n} \right| < \epsilon $$

will tend to 1 when $n \rightarrow \infty$. Bernoulli's theorem follows from Poisson's theorem when $p_1 = \dots = p_n$. The theorem was established by S.D. Poisson [P]. The proof of Poisson's theorem was obtained by Poisson from a variant of the Laplace theorem. A simple proof of Poisson's theorem was given by P.L. Chebyshev (1846), who also stated the first general form of the law of large numbers, which includes Poisson's theorem as a particular case.
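
The statement is easy to illustrate numerically. The following is a minimal Monte Carlo sketch, assuming the arbitrary (hypothetical) choice $p_k = 1/2 + 1/(2k)$; it estimates the probability that the frequency $\mu_n / n$ deviates from $(p_1 + \dots + p_n)/n$ by less than $\epsilon$, which should approach 1 as $n$ grows.

<pre>
# Monte Carlo sketch of Poisson's theorem for a Poisson scheme with
# trial-dependent probabilities p_k (hypothetical choice p_k = 1/2 + 1/(2k)).
import random

def deviation_probability(n, eps=0.05, reps=2000, seed=0):
    """Estimate P(|mu_n/n - (p_1 + ... + p_n)/n| < eps) by simulation."""
    rng = random.Random(seed)
    p = [0.5 + 0.5 / k for k in range(1, n + 1)]
    mean_p = sum(p) / n
    hits = 0
    for _ in range(reps):
        mu_n = sum(rng.random() < pk for pk in p)   # occurrences of A in n trials
        if abs(mu_n / n - mean_p) < eps:
            hits += 1
    return hits / reps

for n in (50, 500, 5000):
    print(n, deviation_probability(n))   # estimates approach 1 as n increases
</pre>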

Poisson's theorem also refers to a limit theorem in probability theory about the convergence of the binomial distribution to the Poisson distribution: if $P_n(m)$ is the probability that in $n$ Bernoulli trials a certain event $A$ occurs exactly $m$ times, where the probability of $A$ in every trial is $p$, then for large values of $n$ and $1/p$ the probability $P_n(m)$ is approximately

$$ e^{-np} \frac{(np)^m}{m!} . $$
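
As a concrete check, the sketch below compares the exact binomial probabilities $P_n(m) = \binom{n}{m} p^m (1-p)^{n-m}$ with this Poisson approximation for a hypothetical choice of parameters with $np = 3$.

<pre>
# Compare the exact binomial probability P_n(m) with the Poisson
# approximation e^{-np} (np)^m / m!  (hypothetical values n = 1000, p = 0.003).
from math import comb, exp, factorial

def binom_pmf(n, p, m):
    return comb(n, m) * p**m * (1 - p)**(n - m)

def poisson_pmf(lam, m):
    return exp(-lam) * lam**m / factorial(m)

n, p = 1000, 0.003
for m in range(7):
    print(m, round(binom_pmf(n, p, m), 6), round(poisson_pmf(n * p, m), 6))
</pre>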

The number $\lambda = np$ is the mean number of occurrences of $A$ in $n$ trials, and the sequence of values $e^{-\lambda} \lambda^m / m!$, $m = 0, 1, \dots$ ($\lambda > 0$), forms a Poisson distribution. Poisson's theorem was established by S.D. Poisson [P] for a scheme of trials which is more general than the Bernoulli scheme, in which the probability of occurrence of the event $A$ can vary from trial to trial so that $p_n \rightarrow 0$ when $n \rightarrow \infty$. A strict proof of Poisson's theorem in this case is based on considering a triangular array of random variables such that in the $n$-th row the random variables are independent and take the values 1 and 0 with probabilities $p_n$ and $1 - p_n$, respectively. A more convenient form of Poisson's theorem is as an inequality: if $\lambda = p_1 + \dots + p_n$ and $\delta = p_1^2 + \dots + p_n^2$, then for $n \geq 2$,

$$ \left| P_n(m) - e^{-\lambda} \frac{\lambda^m}{m!} \right| \leq 2 \delta . $$
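
A minimal numerical check of this inequality, assuming an arbitrary (hypothetical) set of unequal probabilities $p_k = k/100$, $k = 1, \dots, 20$: the exact distribution of the number of occurrences is computed by the standard recursion for independent non-identical trials and compared with the Poisson probabilities with parameter $\lambda$.

<pre>
# Check |P_n(m) - e^{-lambda} lambda^m / m!| <= 2*delta for unequal p_k
# (hypothetical probabilities p_k = k/100, k = 1, ..., 20).
from math import exp, factorial

def number_of_occurrences_pmf(p):
    """Exact distribution of the number of successes in independent trials
    with success probabilities p[0], ..., p[n-1] (dynamic programming)."""
    dist = [1.0]
    for pk in p:
        new = [0.0] * (len(dist) + 1)
        for m, prob in enumerate(dist):
            new[m] += prob * (1 - pk)      # trial fails
            new[m + 1] += prob * pk        # trial succeeds
        dist = new
    return dist

p = [k / 100 for k in range(1, 21)]
lam, delta = sum(p), sum(pk**2 for pk in p)
dist = number_of_occurrences_pmf(p)
worst = max(abs(dist[m] - exp(-lam) * lam**m / factorial(m)) for m in range(len(dist)))
print(worst, "<=", 2 * delta)              # the bound holds here
</pre>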

This inequality gives the error when $P_n(m)$ is replaced by $e^{-\lambda} \lambda^m / m!$. If $p_1 = \dots = p_n = \lambda / n$, then $\delta = \lambda^2 / n$. Poisson's theorem and Laplace's theorem give a complete description of the asymptotic behaviour of the binomial distribution. Subsequent generalizations of Poisson's theorem have proceeded in two basic directions: on the one hand, further refinements of Poisson's theorem based on asymptotic expansions have emerged; on the other hand, general conditions have been established under which sums of independent random variables converge to a Poisson distribution.

References

[P] S.D. Poisson, "Recherches sur la probabilité des jugements en matière criminelle et en matière civile", Paris (1837)
[L] M. Loève, "Probability theory", Springer (1977) MR0651017 MR0651018 Zbl 0359.60001
[B] A.A. Borovkov, "Wahrscheinlichkeitstheorie", Birkhäuser (1976) (Translated from Russian) MR0410818

Comments

References

[R] V.K. Rohatgi, "Probability theory", Wiley (1979)
How to Cite This Entry:
Poisson theorem. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Poisson_theorem&oldid=23572
This article was adapted from an original article by A.V. Prokhorov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.