Pitman estimator

From Encyclopedia of Mathematics
Revision as of 08:06, 6 June 2020


An equivariant estimator for the shift parameter with respect to a group of real shifts, having minimal risk with respect to a quadratic loss function.

Let the components $X_1, \dots, X_n$ of a random vector $X = (X_1, \dots, X_n)$ be independent random variables having the same probability law, with probability density belonging to the family

$$ \{ f(x - \theta),\ |x| < \infty,\ \theta \in \Theta = (-\infty, +\infty) \}, $$

and with

$$ {\mathsf E}_\theta X_1^2 = \int\limits_{-\infty}^{+\infty} x^2 f(x - \theta)\, dx < \infty $$

for any $\theta \in \Theta$. Also, let $G = \{ g \}$ be the group of real shifts acting on the realization space $\mathbf R^1 = (-\infty, +\infty)$ of $X_i$ ($i = 1, \dots, n$):

$$ G = \{ g : gX_i = X_i + g,\ |g| < \infty \}. $$

In this case, the problem of estimating $\theta$ is invariant with respect to the quadratic loss function $L(\theta, \widehat\theta) = (\theta - \widehat\theta)^2$ if one uses an equivariant estimator $\widehat\theta = \widehat\theta(X)$ of $\theta$, i.e. $\widehat\theta(gX) = g\widehat\theta(X)$ for all $g \in G$. E.J. Pitman [1] has shown that the equivariant estimator $\widehat\theta(X)$ for the shift parameter $\theta$ with respect to the group $G$ that has minimal risk with respect to the quadratic loss function takes the form

$$ \widehat\theta(X) = X_{(n1)} - \frac{\int\limits_{-\infty}^{+\infty} x f(x) \prod_{i=2}^{n} f(x + Y_i)\, dx}{\int\limits_{-\infty}^{+\infty} f(x) \prod_{i=2}^{n} f(x + Y_i)\, dx}, $$

where $Y_i = X_{(ni)} - X_{(n1)}$, and $X_{(ni)}$ is the $i$-th order statistic of the observation vector $X$. The Pitman estimator is unbiased (cf. Unbiased estimator); it is a minimax estimator in the class of all estimators for $\theta$ with respect to the quadratic loss function if all equivariant estimators for $\theta$ have finite risk functions [2].
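The ratio of integrals above rarely has a closed form, but it is straightforward to evaluate on a grid. The following sketch (not from the original article; the function name, grid bounds, and the requirement that $f$ be vectorized are our own assumptions) computes the Pitman estimator numerically for a given density $f$:

```python
import numpy as np

def pitman_estimate(x, f, lo=-20.0, hi=20.0, m=200001):
    """Numerically evaluate the Pitman estimator of the shift parameter
    for the family f(. - theta), via the ratio-of-integrals formula."""
    x = np.sort(np.asarray(x, dtype=float))
    x1 = x[0]                  # smallest order statistic X_(n1)
    y = x[1:] - x1             # Y_i = X_(ni) - X_(n1), i = 2, ..., n
    t = np.linspace(lo, hi, m)
    w = f(t)                   # build f(t) * prod_{i=2}^n f(t + Y_i)
    for yi in y:
        w = w * f(t + yi)
    # the grid spacing cancels in the ratio of the two Riemann sums
    return x1 - (t * w).sum() / w.sum()
```

With the standard normal density this reproduces the arithmetic mean of the sample (Example 2 below), and with the shifted exponential density of Example 1 it reproduces $X_{(n1)} - 1/n$ up to grid error.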

Example 1. If

$$ f(x - \theta) = e^{-(x - \theta)},\ \ x \geq \theta, $$

i.e. $X_i$, $i = 1, \dots, n$, has an exponential distribution with unknown shift parameter $\theta$, then the Pitman estimator $\widehat\theta(X)$ for $\theta$ is

$$ \widehat\theta(X) = X_{(n1)} - \frac{1}{n}, $$

and its variance is $1/n^2$.
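The unbiasedness and the variance $1/n^2$ in this example are easy to check by simulation. A minimal sketch (not part of the original article; sample size and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 2.0, 5, 200_000

# each row is a sample of size n from the density e^{-(x - theta)}, x >= theta
samples = theta + rng.exponential(scale=1.0, size=(reps, n))

# Pitman estimator for each sample: X_(n1) - 1/n
est = samples.min(axis=1) - 1.0 / n

# est.mean() is close to theta (unbiasedness); est.var() is close to 1/n**2
```

This works because the minimum of $n$ i.i.d. standard exponentials is exponential with rate $n$, so $X_{(n1)}$ has mean $\theta + 1/n$ and variance $1/n^2$.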

Example 2. If

$$ f(x - \theta) = \frac{1}{\sqrt{2 \pi}} e^{-(x - \theta)^2 / 2},\ \ |x| < \infty, $$

i.e. $X_i$, $i = 1, \dots, n$, has a normal distribution $N(\theta, 1)$ with unknown mathematical expectation $\theta$, then the arithmetic mean

$$ \overline{X} = \frac{X_1 + \dots + X_n}{n} $$

is the Pitman estimator.
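A short verification of this example (not in the original article): substituting the normal density into the general formula, with $Y_1 = 0$ and $\overline{Y} = n^{-1} \sum_{i=1}^{n} Y_i$,

$$ f(x) \prod_{i=2}^{n} f(x + Y_i) \propto \exp\left( - \frac{1}{2} \sum_{i=1}^{n} (x + Y_i)^2 \right) \propto \exp\left( - \frac{n}{2} (x + \overline{Y})^2 \right), $$

so the ratio of integrals is the mean $-\overline{Y}$ of a normal density, and

$$ \widehat\theta(X) = X_{(n1)} + \overline{Y} = X_{(n1)} + \frac{1}{n} \sum_{i=1}^{n} (X_{(ni)} - X_{(n1)}) = \overline{X}. $$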

References

[1] E.J. Pitman, "The estimation of the location and scale parameters of a continuous population of any given form", Biometrika, 30 (1939) pp. 391–421
[2] M.A. Girshick, L.J. Savage, "Bayes and minimax estimates for quadratic loss functions", in J. Neyman (ed.), Proc. 2nd Berkeley Symp. Math. Statist. Prob., Univ. California Press (1951) pp. 53–73
[3] S. Zacks, "The theory of statistical inference", Wiley (1971)
This article was adapted from an original article by M.S. Nikulin (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.