Characterization theorems
in probability theory and mathematical statistics

Theorems that establish a connection between the type of the distribution of random variables or random vectors and certain general properties of functions of them.

Example 1.

Let $X$ be a three-dimensional random vector such that:

1) its projections $X_1, X_2, X_3$ onto any three mutually orthogonal axes are independent; and

2) the density $p(x)$, $x = (x_1, x_2, x_3)$, of the probability distribution of $X$ depends only on $x_1^2 + x_2^2 + x_3^2$. Then the distribution of $X$ is normal and

$$ p(x) = \frac{1}{(2\pi)^{3/2} \sigma^3} \exp \left\{ - \frac{1}{2\sigma^2} \left( x_1^2 + x_2^2 + x_3^2 \right) \right\}, $$

where $\sigma > 0$ is a certain constant (the Maxwell law for the distribution of the velocities of molecules in a gas in a stationary state).
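In outline, the classical argument behind Example 1 runs as follows (a sketch, under the usual regularity assumptions). Independence of the projections means the density factorizes, while condition 2) makes it a function of the squared radius alone:

$$ p(x) = p_1(x_1) p_2(x_2) p_3(x_3) = f ( x_1^2 + x_2^2 + x_3^2 ). $$

Setting two of the coordinates to zero shows that each factor $p_i(x_i)$ is proportional to $f(x_i^2)$, so $f(u + v + w) \propto f(u) f(v) f(w)$ for $u, v, w \geq 0$. The measurable solutions of this Cauchy-type functional equation make $\log f$ affine, $\log f(t) = c - t/(2\sigma^2)$ with $\sigma > 0$ ensuring integrability, and normalization of the density yields the formula above.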

Example 2.

Let $X \in \mathbf R^n$ be a random vector with independent and identically-distributed components $X = (X_1, \dots, X_n)$. If the distribution is normal, then the "sample mean"

$$ \overline{X} = \frac{1}{n} \sum_{j = 1}^{n} X_j $$

and the "sample variancesample variance"

$$ \overline{s^2} = \frac{1}{n} \sum_{j = 1}^{n} ( X_j - \overline{X} )^2 $$

are independent random variables. Conversely, if they are independent, then the distribution of $X$ is normal.
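A quick Monte Carlo illustration of Example 2 (a sketch only: the sample size $n = 5$, the replication count and the exponential counterexample are choices made here, and near-zero correlation is merely a necessary symptom of the independence asserted above, not a proof of it):

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_var_correlation(sampler, n=5, reps=100_000):
    """Correlation between the sample mean and the sample variance,
    estimated over many independent samples of size n drawn by `sampler`."""
    x = sampler((reps, n))        # reps independent samples of size n
    xbar = x.mean(axis=1)         # sample means
    s2 = x.var(axis=1)            # sample variances (1/n convention, as above)
    return np.corrcoef(xbar, s2)[0, 1]

# Normal population: mean and variance are independent, so the correlation is ~ 0.
print(mean_var_correlation(rng.standard_normal))
# Exponential population: the two statistics are dependent; the correlation is clearly positive.
print(mean_var_correlation(rng.standard_exponential))
```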

Example 3.

Let $X \in \mathbf R^n$ be a vector with independent and identically-distributed components. There exist non-zero constants $a_1, \dots, a_n$, $b_1, \dots, b_n$ such that the random variables

$$ Y_1 = a_1 X_1 + \dots + a_n X_n $$

and

$$ Y_2 = b_1 X_1 + \dots + b_n X_n $$

are independent if and only if $X$ has a normal distribution. The last assertion remains true if the assumption that $Y_1$ and $Y_2$ are independent is replaced by the assumption that they are identically distributed, under certain additional restrictions on the coefficients $a_j$ and $b_j$.
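One half of Example 3 is elementary and worth making explicit (a supplementary remark using only standard facts about the normal law). If the $X_j$ are independent normal with common variance $\sigma^2$, then $(Y_1, Y_2)$ is jointly normal with

$$ \operatorname{cov} ( Y_1 , Y_2 ) = \sigma^2 \sum_{j = 1}^{n} a_j b_j , $$

so choosing coefficients with $\sum_j a_j b_j = 0$ makes $Y_1$ and $Y_2$ uncorrelated and hence independent. The converse direction, that independence of two such linear forms with non-zero coefficients forces normality of the $X_j$, is the Skitovich–Darmois theorem.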

Characterizations of a similar kind, describing the distribution of a random vector $X \in \mathbf R^n$ through the identical distribution or the independence of two polynomials $Q_1(X)$ and $Q_2(X)$, are given by a number of characterization theorems that play an important role in mathematical statistics.

References

[1] A.M. Kagan, Yu.V. Linnik, C.R. Rao, "Characterization problems in mathematical statistics", Wiley (1973) (Translated from Russian)