Bahadur efficiency

From Encyclopedia of Mathematics
Revision as of 10:26, 27 April 2020


The large sample study of test statistics in a given hypothesis testing problem is commonly based on the following concept of asymptotic Bahadur efficiency [a1], [a2] (cf. also Statistical hypotheses, verification of). Let $ \Theta _ {0} $ and $ \Theta _ {1} $ be the parametric sets corresponding to the null hypothesis and its alternative, respectively. Assume that large values of a test statistic (cf. Test statistics) $ T _ {n} = T _ {n} ( \mathbf x ) $ based on a random sample $ \mathbf x = ( x _ {1} , \dots, x _ {n} ) $ give evidence against the null hypothesis. For a fixed $ \theta \in \Theta _ {0} $ and a real number $ t $, put $ F _ {n} ( t \mid \theta ) = {\mathsf P} _ \theta ( T _ {n} < t ) $ and let $ L _ {n} ( t \mid \theta ) = 1 - F _ {n} ( t \mid \theta ) $. The random quantity $ L _ {n} ( T _ {n} ( \mathbf x ) \mid \theta _ {0} ) $ is the $ {\mathsf P} $-value corresponding to the statistic $ T _ {n} $ when $ \theta _ {0} $ is the true parametric value. For example, if $ L _ {n} ( T _ {n} ( \mathbf x ) \mid \theta _ {0} ) < \alpha $, the null hypothesis $ \Theta _ {0} = \{ \theta _ {0} \} $ is rejected at the significance level $ \alpha $. If, for $ \eta \in \Theta _ {1} $, with $ {\mathsf P} _ \eta $-probability one,
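As a concrete illustration (added here, not part of the original article), the quantities $ F _ {n} $, $ L _ {n} $ and the resulting $ {\mathsf P} $-value can be computed explicitly in the Gaussian shift model $ x _ {i} \sim N ( \theta , 1 ) $ with $ T _ {n} = \sqrt n ( \bar x - \theta _ {0} ) $; the function name below is hypothetical.

```python
import math
import random

def p_value_gaussian_mean(sample, theta0=0.0):
    """L_n(T_n(x) | theta0) for the one-sided Gaussian mean test.

    Sketch under the assumption x_i ~ N(theta, 1): the statistic
    T_n = sqrt(n) * (xbar - theta0) is N(0, 1) under the null, so
    L_n(t | theta0) = 1 - Phi(t), the standard normal survival function.
    """
    n = len(sample)
    t_n = math.sqrt(n) * (sum(sample) / n - theta0)
    return 0.5 * math.erfc(t_n / math.sqrt(2.0))  # 1 - Phi(t_n)

# A sample sitting exactly at theta0 gives T_n = 0 and p-value 1/2 ...
print(p_value_gaussian_mean([0.0] * 25))  # -> 0.5

# ... while under the alternative eta = 1 the p-value is already tiny at n = 100.
random.seed(0)
x = [random.gauss(1.0, 1.0) for _ in range(100)]
print(p_value_gaussian_mean(x))
```

The rapid decay of this p-value under the alternative is exactly what the Bahadur slope, defined next, quantifies.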

$$ \lim _ {n \rightarrow \infty } 2 n ^ {- 1 } \log L _ {n} ( T _ {n} ( \mathbf x ) \mid \theta ) = - d ( \eta \mid \theta ) , $$

then $ d ( \eta \mid \theta ) $ is called the Bahadur exact slope of $ T $. The larger the Bahadur exact slope, the faster the rate of decay of the $ {\mathsf P} $-value under the alternative. It is known that for any $ T $, $ d ( \eta \mid \theta ) \leq 2 K ( \eta , \theta ) $, where $ K ( \eta , \theta ) $ is the (Kullback–Leibler) information number corresponding to $ {\mathsf P} _ \eta $ and $ {\mathsf P} _ \theta $. A test statistic $ T $ is called Bahadur efficient at $ \eta $ if

$$ e _ {T} ( \eta ) = \inf _ \theta { \frac{1}{2} } d ( \eta \mid \theta ) = \inf _ \theta K ( \eta, \theta ) . $$
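Continuing the hypothetical Gaussian example (an illustration added here, not from the original article): for $ T _ {n} = \sqrt n \, \bar x $ and $ \Theta _ {0} = \{ 0 \} $ one has $ d ( \eta \mid 0 ) = \eta ^ {2} $ and $ K ( \eta , 0 ) = \eta ^ {2} / 2 $, so the bound $ d \leq 2 K $ is attained and the mean test is Bahadur efficient. The sketch below checks the defining limit numerically.

```python
import math
import random

def log_survival_normal(t):
    # log(1 - Phi(t)); math.erfc keeps precision for moderately large t
    return math.log(0.5 * math.erfc(t / math.sqrt(2.0)))

random.seed(1)
eta = 1.0  # true parameter under the alternative
for n in (100, 400, 900):
    xbar = sum(random.gauss(eta, 1.0) for _ in range(n)) / n
    t_n = math.sqrt(n) * xbar                      # T_n = sqrt(n) * xbar
    slope_est = -2.0 / n * log_survival_normal(t_n)
    print(n, round(slope_est, 3))
# -2/n * log L_n(T_n | 0) approaches d(eta | 0) = eta^2 = 2 K(eta, 0) = 1
```

The slow $ O ( n ^ {-1} \log n ) $ correction terms mean the estimates approach $ \eta ^ {2} = 1 $ only gradually, which is typical for exact-slope computations.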

The concept of Bahadur efficiency allows one to compare two (sequences of) test statistics $ T ^ {( 1 ) } $ and $ T ^ {( 2 ) } $ from the following perspective. Let $ N _ {i} $, $ i = 1, 2 $, be the smallest sample size required to reject $ \Theta _ {0} $ at the significance level $ \alpha $ on the basis of a random sample $ \mathbf x = ( x _ {1} , \dots ) $ when $ \eta $ is the true parametric value. The ratio $ N _ {2} / N _ {1} $ gives a measure of the relative efficiency of $ T ^ {( 1 ) } $ to $ T ^ {( 2 ) } $. To reduce the number of arguments $ \alpha $, $ \mathbf x $ and $ \eta $, one usually considers the random variable which is the limit of this ratio as $ \alpha \rightarrow 0 $. In many situations this limit does not depend on $ \mathbf x $, so it represents the efficiency of $ T ^ {( 1 ) } $ against $ T ^ {( 2 ) } $ at $ \eta $, given by the convenient formula

$$ \lim _ {\alpha \rightarrow 0 } \frac{N _ {2} }{N _ {1} } = \frac{d _ {1} ( \eta \mid \theta _ {0} ) }{d _ {2} ( \eta \mid \theta _ {0} ) } , $$

where $ d _ {1} $ and $ d _ {2} $ are the corresponding Bahadur slopes.
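For instance (an added illustration; the sign-test slope formula is quoted from the classical literature and should be treated as an assumption here), in the $ N ( \eta , 1 ) $ shift model with $ \Theta _ {0} = \{ 0 \} $ the mean test has slope $ d _ {1} ( \eta ) = \eta ^ {2} $, while the sign test has slope $ d _ {2} ( \eta ) = 2 [ p \log ( 2 p ) + ( 1 - p ) \log ( 2 ( 1 - p ) ) ] $ with $ p = \Phi ( \eta ) $; their ratio gives the limiting sample-size ratio.

```python
import math

def Phi(t):
    # standard normal distribution function
    return 0.5 * math.erfc(-t / math.sqrt(2.0))

def slope_mean(eta):
    # exact slope of the mean test at eta (Theta_0 = {0}), Gaussian model
    return eta ** 2

def slope_sign(eta):
    # exact slope of the sign test: twice the Bernoulli information number
    # between p = Phi(eta) and 1/2 (classical formula, assumed here)
    p = Phi(eta)
    return 2.0 * (p * math.log(2.0 * p) + (1.0 - p) * math.log(2.0 * (1.0 - p)))

# lim N_2 / N_1 = d_1 / d_2: how many more observations the sign test needs
for eta in (1.0, 0.5, 0.1):
    print(eta, slope_mean(eta) / slope_sign(eta))
# as eta -> 0 this ratio tends to pi/2, i.e. the Bahadur efficiency of the
# sign test against the mean test tends to the familiar value 2/pi
```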

To evaluate the exact slope, the following result ([a2], Thm. 7.2) is commonly used. Assume that for any $ \eta $, with $ {\mathsf P} _ \eta $-probability one as $ n \rightarrow \infty $, $ T _ {n} ( \mathbf x ) \rightarrow b ( \eta ) $, and that the limit $ g _ \theta ( t ) = - \lim _ {n \rightarrow \infty } 2 n ^ {- 1 } \log L _ {n} ( t \mid \theta ) $ exists for $ t $ taking values in an open interval and is a continuous function there. Then the exact slope of $ T $ at $ ( \eta , \theta ) $ has the form $ d ( \eta \mid \theta ) = g _ \theta ( b ( \eta ) ) $. See [a4] for generalizations of this formula.
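As a sketch of how this result is applied (the Gaussian model is an assumption added here, not part of the original article), it recovers the exact slope of the sample mean $ T _ {n} = \bar x $ in the $ N ( \theta , 1 ) $ model:

```latex
% Step 1 (law of large numbers): the a.s. limit of T_n under P_eta.
T _ {n} ( \mathbf x ) = \bar x \rightarrow b ( \eta ) = \eta
    \quad {\mathsf P} _ \eta \text{-a.s.}
% Step 2 (Cramér's theorem, Gaussian rate function (t - theta)^2 / 2):
g _ \theta ( t ) = - \lim _ {n \rightarrow \infty } 2 n ^ {- 1 }
    \log {\mathsf P} _ \theta ( \bar x \geq t ) = ( t - \theta ) ^ {2} ,
    \qquad t > \theta .
% Step 3 (the theorem): the exact slope attains the information bound.
d ( \eta \mid \theta ) = g _ \theta ( b ( \eta ) ) = ( \eta - \theta ) ^ {2}
    = 2 K ( \eta , \theta ) .
```

Since $ K ( \eta , \theta ) = ( \eta - \theta ) ^ {2} / 2 $ for these Gaussian measures, the mean test is Bahadur efficient in this model.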

The exact Bahadur slopes of many classical tests have been found. See [a3].

References

[a1] R.R. Bahadur, "Rates of convergence of estimates and test statistics" Ann. Math. Stat. , 38 (1967) pp. 303–324
[a2] R.R. Bahadur, "Some limit theorems in statistics" , Regional Conf. Ser. Applied Math. , SIAM (1971)
[a3] Ya.Yu. Nikitin, "Asymptotic efficiency of nonparametric tests" , Cambridge Univ. Press (1995)
[a4] L.J. Gleser, "Large deviation indices and Bahadur exact slopes" Statistics & Decisions , 1 (1984) pp. 193–204
How to Cite This Entry:
Bahadur efficiency. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Bahadur_efficiency&oldid=45582
This article was adapted from an original article by A.L. Rukhin (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article