Neyman method of confidence intervals


One of the methods of confidence estimation, which makes it possible to obtain interval estimators (cf. Interval estimator) for unknown parameters of probability laws from results of observations. It was proposed and developed by J. Neyman (see [1], [2]). The essence of the method is as follows. Let $X_1, \dots, X_n$ be random variables whose joint distribution function $F(x, \theta)$ depends on a parameter $\theta \in \Theta \subset \mathbf{R}^1$, $x = (x_1, \dots, x_n) \in \mathbf{R}^n$. Suppose, next, that a statistic $T = T(X_1, \dots, X_n)$ with distribution function $G(t, \theta)$, $\theta \in \Theta$, is used as a point estimator of the parameter $\theta$. Then for any number $P$ in the interval $0.5 < P < 1$ one can define a system of two equations in $\theta$:

$$ \tag{*} G(T, \theta) = \left\{ \begin{array}{l} P, \\ 1 - P. \end{array} \right. $$

Under certain regularity conditions on $F(x, \theta)$, which are satisfied in almost all cases of practical interest, the system (*) has a unique solution

$$ \underline{\theta} = \underline{\theta}(T), \qquad \overline{\theta} = \overline{\theta}(T), \qquad \underline{\theta}, \overline{\theta} \in \Theta, $$

such that

$$ \mathsf{P} \{ \underline{\theta} < \theta < \overline{\theta} \mid \theta \} \geq 2P - 1. $$

The set $(\underline{\theta}, \overline{\theta}) \subset \Theta$ is called the confidence interval (confidence estimator) for the unknown parameter $\theta$ with confidence probability $2P - 1$. The statistics $\underline{\theta}$ and $\overline{\theta}$ are called the lower and upper confidence bounds corresponding to the chosen confidence coefficient $P$. In turn, the number

$$ p = \inf_{\theta \in \Theta} \mathsf{P} \{ \underline{\theta} < \theta < \overline{\theta} \mid \theta \} $$

is called the confidence coefficient of the confidence interval $(\underline{\theta}, \overline{\theta})$. Thus, Neyman's method of confidence intervals leads to interval estimators with confidence coefficient $p \geq 2P - 1$.
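
If $G(t, \theta)$ is continuous and strictly monotone in $\theta$, the system (*) reduces to two one-dimensional root-finding problems. The following is a minimal sketch of this reduction, not part of the original article; the name `neyman_interval`, the bracket `lo, hi`, and the use of SciPy's Brent solver are illustrative assumptions.

```python
# A sketch of the Neyman construction, assuming G(t, theta) is
# continuous and strictly monotone in theta on the bracket [lo, hi].
from scipy.optimize import brentq

def neyman_interval(G, t_obs, P, lo, hi):
    """Solve G(t_obs, theta) = P and G(t_obs, theta) = 1 - P in theta.

    G      -- the distribution function G(t, theta) of the statistic T
    t_obs  -- the observed value of T
    P      -- a number with 0.5 < P < 1
    lo, hi -- a bracket in Theta containing both roots
    """
    root_p = brentq(lambda th: G(t_obs, th) - P, lo, hi)
    root_q = brentq(lambda th: G(t_obs, th) - (1.0 - P), lo, hi)
    # Which equation gives the lower bound depends on the direction of
    # monotonicity of G in theta; sorting the roots covers both cases.
    return min(root_p, root_q), max(root_p, root_q)
```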

Example 1. Suppose that independent random variables $X_1, \dots, X_n$ are subject to one and the same normal law $\Phi(x - \theta)$ whose mathematical expectation $\theta$ is not known (cf. Normal distribution). Then the best estimator for $\theta$ is the sufficient statistic $\overline{X} = \sum_{i=1}^{n} X_i / n$, which is distributed according to the normal law $\Phi[\sqrt{n}(x - \theta)]$. Fixing $P$ in $0.5 < P < 1$ and solving the equations

$$ \Phi[\sqrt{n}(\overline{X} - \theta)] = P, \qquad \Phi[\sqrt{n}(\overline{X} - \theta)] = 1 - P, $$

one finds the lower and upper confidence bounds

$$ \underline{\theta} = \overline{X} - \frac{1}{\sqrt{n}} \Phi^{-1}(P), \qquad \overline{\theta} = \overline{X} - \frac{1}{\sqrt{n}} \Phi^{-1}(1 - P) $$

corresponding to the chosen confidence coefficient $P$. Since

$$ \Phi^{-1}(y) + \Phi^{-1}(1 - y) \equiv 0, \qquad y \in (0, 1), $$

the confidence interval for the unknown mathematical expectation $\theta$ of the normal law $\Phi(x - \theta)$ has the form

$$ \left( \overline{X} - \frac{1}{\sqrt{n}} \Phi^{-1}(P), \ \overline{X} + \frac{1}{\sqrt{n}} \Phi^{-1}(P) \right), $$

and its confidence coefficient is precisely $2P - 1$.
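
As a numerical check of Example 1 (a sketch only; the sample values below are invented for illustration), one can compute the interval directly, with `scipy.stats.norm.ppf` serving as the quantile function $\Phi^{-1}$:

```python
# Confidence interval for the mean of a normal law with unit variance,
# following Example 1; the observations are hypothetical.
import numpy as np
from scipy.stats import norm

x = np.array([0.3, -1.1, 0.8, 0.2, 1.5])   # hypothetical sample
n = len(x)
P = 0.975                                   # confidence coefficient 2P - 1 = 0.95
x_bar = x.mean()

half_width = norm.ppf(P) / np.sqrt(n)       # Phi^{-1}(1 - P) = -Phi^{-1}(P)
print(f"({x_bar - half_width:.3f}, {x_bar + half_width:.3f})")
```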

Example 2. Let $\mu$ be a random variable subject to the binomial law with parameters $n$ and $\theta$ (cf. Binomial distribution), that is, for any integer $m = 0, 1, \dots, n$,

$$ \mathsf{P} \{ \mu \leq m \mid n, \theta \} = \sum_{k=0}^{m} \binom{n}{k} \theta^{k} (1 - \theta)^{n-k} = I_{1-\theta}(n - m, m + 1), \qquad 0 < \theta < 1, $$

where

$$ I_x(a, b) = \frac{1}{B(a, b)} \int_0^x t^{a-1} (1 - t)^{b-1} \, dt $$

is the incomplete beta-function ($0 \leq x \leq 1$, $a > 0$, $b > 0$). If the "success" parameter $\theta$ is not known, then to determine the confidence bounds one has to solve, in accordance with Neyman's method of confidence intervals, the equations

$$ I_{1-\theta}(n - \mu, \mu + 1) = \left\{ \begin{array}{l} P, \\ 1 - P, \end{array} \right. $$

where $0.5 < P < 1$. The roots $\overline{\theta}$ and $\underline{\theta}$ of these equations, which can be found from tables of mathematical statistics, are the upper and lower confidence bounds, respectively, with confidence coefficient $P$. The coefficient of the resulting confidence interval $(\underline{\theta}, \overline{\theta})$ is precisely $2P - 1$. Obviously, if an experiment gives $\mu = 0$, then $\underline{\theta} = 0$, and if $\mu = n$, then $\overline{\theta} = 1$.
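
In place of statistical tables, the equations of Example 2 can be solved with `scipy.special.betaincinv`, which inverts the regularized incomplete beta-function $I_x(a, b)$ in $x$. The sketch below is illustrative (the counts are invented) and assumes $0 < \mu < n$:

```python
# Confidence bounds for the binomial parameter theta, following
# Example 2; betaincinv(a, b, y) returns the x with I_x(a, b) = y.
from scipy.special import betaincinv

def binomial_bounds(mu, n, P):
    """Roots of I_{1-theta}(n - mu, mu + 1) = P and = 1 - P, for 0 < mu < n.

    I_{1-theta}(n - mu, mu + 1) decreases in theta, so the equation with
    right-hand side P gives the lower bound and the one with 1 - P gives
    the upper bound.  As noted above, mu = 0 forces the lower bound 0
    and mu = n forces the upper bound 1.
    """
    lower = 1.0 - betaincinv(n - mu, mu + 1, P)
    upper = 1.0 - betaincinv(n - mu, mu + 1, 1.0 - P)
    return lower, upper

# For example, mu = 3 successes in n = 10 trials, P = 0.975 (2P - 1 = 0.95):
print(binomial_bounds(3, 10, 0.975))
```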

Neyman's method of confidence intervals differs substantially from the Bayesian method (cf. Bayesian approach) and from the method based on Fisher's fiducial approach (cf. Fiducial distribution). In Neyman's method the unknown parameter $\theta$ of the distribution function $F(x, \theta)$ is treated as a constant quantity, and the confidence interval $(\underline{\theta}(T), \overline{\theta}(T))$ is constructed from an experiment in the course of which the value of the statistic $T$ is calculated. Consequently, according to Neyman's method of confidence intervals, the probability that $\underline{\theta} < \theta < \overline{\theta}$ holds is the a priori probability that the confidence interval $(\underline{\theta}, \overline{\theta})$ "covers" the unknown true value of the parameter $\theta$. In fact, Neyman's confidence method remains valid if $\theta$ is a random variable, because the interval estimator is constructed from the outcome of an experiment and consequently does not depend on the a priori distribution of the parameter. Neyman's method has the advantage over the Bayesian and fiducial approaches of being independent of a priori information about the parameter $\theta$, and so, in contrast to Fisher's method, it is logically sound. In general, Neyman's method leads to a whole system of confidence intervals for the unknown parameter, and in this context there arises the problem of constructing an optimal interval estimator having, for example, the property of being unbiased, accurate or similar, which can be solved within the framework of the theory of statistical hypothesis testing.

References

[1] J. Neyman, "On the problem of confidence intervals", Ann. Math. Stat., 6 (1935), pp. 111–116
[2] J. Neyman, "Outline of a theory of statistical estimation based on the classical theory of probability", Philos. Trans. Roy. Soc. London Ser. A, 236 (1937), pp. 333–380
[3] L.N. Bol'shev, N.V. Smirnov, "Tables of mathematical statistics", Libr. Math. Tables, 46, Nauka (1983) (in Russian) (processed by L.S. Bark and E.S. Kedrova)
[4] L.N. Bol'shev, "On the construction of confidence limits", Theor. Probab. Appl., 10 (1965), pp. 173–177 (translated from Teor. Veroyatnost. i Primenen., 10:1 (1965), pp. 187–192)
[5] E.L. Lehmann, "Testing statistical hypotheses", Wiley (1986)
How to Cite This Entry:
Neyman method of confidence intervals. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Neyman_method_of_confidence_intervals&oldid=49488
This article was adapted from an original article by M.S. Nikulin (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article