Behrens-Fisher problem

From Encyclopedia of Mathematics
Latest revision as of 10:27, 16 July 2021


An analytical problem which arose in the context of the statistical problem of comparing, starting from empirical data, the mathematical expectations of two normal distributions, the variances of which are unknown (it is assumed that the ratio of the variances is also unknown). This problem was posed by W.U. Behrens [1] in connection with processing crop data. The modern formulation of the Behrens–Fisher problem is due to R. Fisher and is based on the concept of sufficient statistics. Let $X_{11} \dots X_{1 n_1}$ and $X_{21} \dots X_{2 n_2}$ be mutually independent random variables with a normal distribution, and let $\mathsf{E} X_{1i} = \mu_1$, $\mathsf{E} (X_{1i} - \mu_1)^2 = \sigma_1^2$ ($i = 1 \dots n_1$) and $\mathsf{E} X_{2j} = \mu_2$, $\mathsf{E} (X_{2j} - \mu_2)^2 = \sigma_2^2$ ($j = 1 \dots n_2$). It is assumed that the values of the mathematical expectations $\mu_1$, $\mu_2$, of the variances $\sigma_1^2$, $\sigma_2^2$ and of their ratio $\sigma_1^2 / \sigma_2^2$ are unknown. A sufficient statistic in the case $n_1, n_2 \geq 2$ is the four-dimensional vector $(\overline{X}_1, \overline{X}_2, S_1^2, S_2^2)$, the components of which are given by the formulas

$$ \overline{X}_1 = \frac{1}{n_1} \sum_{i=1}^{n_1} X_{1i}, \qquad \overline{X}_2 = \frac{1}{n_2} \sum_{j=1}^{n_2} X_{2j}, $$

$$ S_1^2 = \sum_{i=1}^{n_1} (X_{1i} - \overline{X}_1)^2, \qquad S_2^2 = \sum_{j=1}^{n_2} (X_{2j} - \overline{X}_2)^2, $$
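As a concrete illustration (a minimal sketch, not part of the original article; the function name `sufficient_statistic` is ours), the four components of the sufficient statistic can be computed directly from the two samples. Note that $S_1^2$ and $S_2^2$ are the raw sums of squared deviations, exactly as in the displayed formulas, with no division by $n$ or $n - 1$:

```python
def sufficient_statistic(x1, x2):
    """Components of the sufficient statistic (X1-bar, X2-bar, S1^2, S2^2)
    for two normal samples, following the formulas above."""
    xbar1 = sum(x1) / len(x1)
    xbar2 = sum(x2) / len(x2)
    s1_sq = sum((x - xbar1) ** 2 for x in x1)  # S1^2: raw sum of squared deviations
    s2_sq = sum((x - xbar2) ** 2 for x in x2)  # S2^2: raw sum of squared deviations
    return xbar1, xbar2, s1_sq, s2_sq
```

For example, for the samples $[1, 2, 3]$ and $[2, 4]$ this returns $(2.0,\, 3.0,\, 2.0,\, 2.0)$.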

and which are mutually independent random variables; $\sqrt{n_1} (\overline{X}_1 - \mu_1) / \sigma_1$ and $\sqrt{n_2} (\overline{X}_2 - \mu_2) / \sigma_2$ have a standard normal distribution, while $S_1^2 / \sigma_1^2$ and $S_2^2 / \sigma_2^2$ have a chi-squared distribution with $n_1 - 1$ and $n_2 - 1$ degrees of freedom, respectively. Since a sufficient statistic contains the same information about the unknown parameters $\mu_1, \mu_2, \sigma_1^2, \sigma_2^2$ as the initial $n_1 + n_2$ random variables $X_{1i}$ and $X_{2j}$, only the sufficient statistics need be considered in testing hypotheses about the values of these parameters. In particular, this idea underlies the modern formulation of the problem of testing the hypothesis $\mu_1 - \mu_2 = \Delta$, where $\Delta$ is a previously given number; here the Behrens–Fisher problem reduces to finding a set $K_\alpha$ in the space of possible values of the random variables $\overline{X}_1 - \overline{X}_2$, $S_1^2$, $S_2^2$ such that, if the hypothesis being tested is correct, the probability of the event $(\overline{X}_1 - \overline{X}_2, S_1^2, S_2^2) \in K_\alpha$ does not depend on any of the unknown parameters and is exactly equal to a given number $\alpha$ in the interval $0 < \alpha < 1$.
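These distributional facts can be checked by simulation (an illustrative sketch, not from the article; the helper `simulate_mean_of_scaled_S2` and its parameters are ours). Since the chi-squared distribution with $n - 1$ degrees of freedom has mean $n - 1$, the simulated average of $S^2 / \sigma^2$ should be close to $n - 1$ for any choice of $\mu$ and $\sigma$:

```python
import random

def simulate_mean_of_scaled_S2(n, mu, sigma, reps, seed=0):
    """Monte Carlo estimate of E[S^2 / sigma^2] for samples of size n
    from N(mu, sigma^2); should be close to n - 1 (mean of chi^2_{n-1})."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        x = [rng.gauss(mu, sigma) for _ in range(n)]
        xbar = sum(x) / n
        s_sq = sum((xi - xbar) ** 2 for xi in x)  # raw sum of squares, as above
        total += s_sq / sigma ** 2
    return total / reps
```

With $n = 5$ the estimate comes out near $n - 1 = 4$, regardless of the values of $\mu$ and $\sigma$ used.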

The question of the existence of a solution to the Behrens–Fisher problem was discussed at length by prominent mathematicians (mainly in connection with the approach to the problem taken by R.A. Fisher, which passed beyond the borders of probability theory). It was shown by Yu.V. Linnik et al. in 1964 that if the sample sizes $n_1$ and $n_2$ are of different parities, a solution $K_\alpha$ to the Behrens–Fisher problem exists. If the parities of $n_1$ and $n_2$ are equal, the existence of a solution remains an open question.

The Behrens–Fisher problem has often been generalized and modified. A. Wald, in particular, posed the problem of finding a set $K_\alpha$ in the sample space of the two variables $(\overline{X}_1 - \overline{X}_2) / S_1^2$ and $S_1^2 / S_2^2$. The question of the existence of a solution to this problem remains open. However, it is effectively possible to construct a set $K_\alpha^{*}$ such that, if the hypothesis $\mu_1 - \mu_2 = \Delta$ being tested is in fact correct, the probability of the event $((\overline{X}_1 - \overline{X}_2) / S_1^2, S_1^2 / S_2^2) \in K_\alpha^{*}$, while still depending on the unknown ratio $\sigma_1^2 / \sigma_2^2$, deviates from the given $\alpha$ only by a small amount. This fact is the basis of modern recommendations for the practical construction of tests to compare $\mu_1$ and $\mu_2$. Simple tests for the comparison of $\mu_1$ with $\mu_2$, which are also computationally convenient, were proposed by V.T. Romanovskii, M. Bartlett, H. Scheffé and others. However, the statistics of these tests are not expressed in terms of sufficient statistics and are, for this reason, usually less powerful than tests based on the solution of the Behrens–Fisher problem and its generalizations.
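One widely used practical approximate test of this kind is Welch's test, with the Welch–Satterthwaite approximation for the degrees of freedom. The following is a sketch of that standard construction (it is not the exact solution of Linnik et al., and the function name is ours; note that it uses the sample variances $s^2 = S^2 / (n - 1)$, whereas the article's $S^2$ is the raw sum of squares):

```python
import math

def welch_statistic(x1, x2, delta=0.0):
    """Welch's t statistic and Welch-Satterthwaite approximate degrees of
    freedom for testing mu1 - mu2 = delta with unequal, unknown variances."""
    n1, n2 = len(x1), len(x2)
    m1, m2 = sum(x1) / n1, sum(x2) / n2
    # Sample variances s^2 = S^2 / (n - 1)
    v1 = sum((x - m1) ** 2 for x in x1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in x2) / (n2 - 1)
    se_sq = v1 / n1 + v2 / n2
    t = (m1 - m2 - delta) / math.sqrt(se_sq)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = se_sq ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, df
```

The statistic $t$ is then referred to a Student distribution with the (generally non-integer) $df$ degrees of freedom; the attained level of the resulting test deviates from the nominal $\alpha$ only slightly, in the spirit of the set $K_\alpha^{*}$ above.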

References

[1] W.U. Behrens, Landwirtsch. Jahresber., 68:6 (1929), pp. 807–837
[2] Yu.V. Linnik, "Statistical problems with nuisance parameters", Amer. Math. Soc. (1968) (Translated from Russian)
[3] Yu.V. Linnik, I.V. Romanovskii, V.N. Sudakov, "A nonrandomized homogeneous test in the Behrens–Fisher problem", Soviet Math. Dokl., 5:2 (1964), pp. 570–572; Dokl. Akad. Nauk SSSR, 155:6 (1964), pp. 1262–1264

Comments

References

[a1] Yu.V. Linnik, "Randomized homogeneous tests for the Behrens–Fisher problem", Selected Transl. in Math. Stat. and Probab., 6 (1966), pp. 207–217 (Translated from Russian)
How to Cite This Entry:
Behrens-Fisher problem. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Behrens-Fisher_problem&oldid=22079
This article was adapted from an original article by L.N. Bol'shev (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article