Sheppard corrections

for moments

Corrections applied when the realizations of continuous random variables are discretized, used to diminish the systematic errors that arise in estimating the moments of these variables under a given system of rounding-off. Such corrections were first proposed by W.F. Sheppard [1].

Let $ X $ be a continuously-distributed random variable for which the probability density $ p(x) $, $ x \in \mathbf{R}^{1} $, has an everywhere continuous derivative $ p^{(s)}(x) $ of order $ s $ on $ \mathbf{R}^{1} $ such that

$$ p^{(s)}(x) = O( |x|^{-1-\delta} ) \ \textrm{ as } \ x \rightarrow \infty $$

for some $ \delta > 0 $, and let the moments (cf. Moment) $ \alpha_{k} = \mathsf{E} X^{k} $ exist. Further, let a system of rounding-off the results of observations be given (i.e. an origin $ x_{0} $ and a step $ h > 0 $ are given), so that instead of the realizations of the initial continuous random variable $ X $ one actually observes realizations $ x_{m} = x_{0} + mh $, $ m = 0, \pm 1, \pm 2, \dots $, of a discrete random variable

$$ Y = x_{0} + h \left[ \frac{X - x_{0}}{h} + \frac{1}{2} \right], $$

where $ [a] $ is the integer part of $ a $.
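In code this rounding scheme simply snaps each observation to the nearest grid point. The following minimal Python sketch is only an illustration; the origin `x0`, the step `h` and the normal test data are assumptions made for the example, not part of the article.

```python
import numpy as np

def round_to_grid(x, x0=0.0, h=0.1):
    """Y = x0 + h*[(x - x0)/h + 1/2], with [.] the integer part (floor)."""
    return x0 + h * np.floor((x - x0) / h + 0.5)

# Hypothetical example: round a few realizations of a continuous variable
rng = np.random.default_rng(0)
x = rng.normal(size=5)
print(np.round(x, 3))           # original realizations of X
print(round_to_grid(x, h=0.1))  # corresponding realizations of the discrete variable Y
```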

The moments $ a_{i} = \mathsf{E} Y^{i} $, $ i = 1, \dots, k $, of $ Y $ are computed from the formula

$$ a_{i} = \sum_{m=-\infty}^{+\infty} x_{m}^{i} \, \mathsf{P} \{ Y = x_{m} \} = \sum_{m=-\infty}^{+\infty} x_{m}^{i} \int\limits_{x_{m} - h/2}^{x_{m} + h/2} p(x) \, dx. $$
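For a concrete density this sum is easy to evaluate numerically, since $ \mathsf{P} \{ Y = x_{m} \} $ is the probability mass of the cell $ [x_{m} - h/2, x_{m} + h/2] $. A hedged Python sketch, assuming for illustration a standard normal density, $ x_{0} = 0 $ and a coarse step $ h = 0.5 $:

```python
from math import erf, sqrt

def Phi(x):
    """Distribution function of the standard normal law."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def moment_of_Y(i, h=0.5, x0=0.0, M=200):
    """a_i = sum_m x_m^i * P{Y = x_m}, with the sum truncated at |m| <= M."""
    total = 0.0
    for m in range(-M, M + 1):
        xm = x0 + m * h
        total += xm**i * (Phi(xm + h / 2) - Phi(xm - h / 2))
    return total

# For N(0,1) the true second moment is alpha_2 = 1, but the grouped one is larger:
print(moment_of_Y(2))       # approx. 1.0208
print(1 + 0.5**2 / 12)      # the gap is essentially Sheppard's h^2/12 term
```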

Generally speaking, $ a_{i} \neq \alpha_{i} $. Thus a question arises: is it possible to adjust the moments $ a_{1}, \dots, a_{k} $ in order to obtain "good" approximations to the moments $ \alpha_{1}, \dots, \alpha_{k} $? The Sheppard corrections give a positive answer to this question.

Let $ g(t) $ be the characteristic function of the random variable $ X $, let $ f(t) $ be the characteristic function of the random variable $ Y $, and let

$$ \phi(t) = \mathsf{E} e^{it \eta} = \frac{2}{th} \sin \frac{th}{2} $$

be the characteristic function of a random variable $ \eta $ which is uniformly distributed on $ [-h/2, h/2] $ and stochastically independent of $ X $. Under these conditions, for small $ h $,

$$ f(t) = g(t) \phi(t) + O( h^{s-1} ), $$

hence the moments of the discrete random variable $ Y $ coincide, up to $ O( h^{s-1} ) $, with the moments of the random variable $ X + \eta $ and thus, up to $ O( h^{s-1} ) $, the following equalities hold:

$$ \alpha_{1} = a_{1}, \qquad \alpha_{2} = a_{2} - \frac{1}{12} h^{2}, \qquad \alpha_{3} = a_{3} - \frac{1}{4} a_{1} h^{2}, $$

$$ \alpha_{4} = a_{4} - \frac{1}{2} a_{2} h^{2} + \frac{7}{240} h^{4}, \qquad \alpha_{5} = a_{5} - \frac{5}{6} a_{3} h^{2} + \frac{7}{48} a_{1} h^{4}, $$

$$ \alpha_{6} = a_{6} - \frac{5}{4} a_{4} h^{2} + \frac{7}{16} a_{2} h^{4} - \frac{31}{1344} h^{6}, \ \dots $$

which contain the so-called Sheppard corrections for the moments $ a_{1}, \dots, a_{k} $.
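Purely as an illustration (not part of the original article), the corrections can be applied to raw sample moments computed from rounded data. The Python sketch below assumes a sample from the standard normal law, origin $ x_{0} = 0 $ and step $ h = 0.5 $; the corrections remove the systematic grouping error, while the ordinary sampling error of course remains.

```python
import numpy as np

def sheppard_corrected(a, h):
    """Apply the Sheppard corrections to the raw moments a[1..6] of Y."""
    return {
        1: a[1],
        2: a[2] - h**2 / 12,
        3: a[3] - a[1] * h**2 / 4,
        4: a[4] - a[2] * h**2 / 2 + 7 * h**4 / 240,
        5: a[5] - 5 * a[3] * h**2 / 6 + 7 * a[1] * h**4 / 48,
        6: a[6] - 5 * a[4] * h**2 / 4 + 7 * a[2] * h**4 / 16 - 31 * h**6 / 1344,
    }

rng = np.random.default_rng(1)
h = 0.5
# Rounded realizations of X ~ N(0, 1) on the grid x_m = m*h (x0 = 0)
y = h * np.floor(rng.normal(size=10**6) / h + 0.5)

a = {i: np.mean(y**i) for i in range(1, 7)}
alpha = sheppard_corrected(a, h)

# True moments of N(0, 1): alpha_2 = 1, alpha_4 = 3, alpha_6 = 15
print(round(a[2], 3), round(alpha[2], 3))  # roughly 1.02  -> 1.00
print(round(a[4], 3), round(alpha[4], 3))  # roughly 3.13  -> 3.00
print(round(a[6], 3), round(alpha[6], 3))  # roughly 15.95 -> 15.0 (sampling noise remains)
```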

References

[1] W.F. Sheppard, "On the calculation of the most probable values of frequency-constants, for data arranged according to equidistant divisions of a scale", Proc. Lond. Math. Soc., 29 (1898), pp. 353–380
[2] H. Cramér, "Mathematical methods of statistics", Princeton Univ. Press (1946)
[3] S.S. Wilks, "Mathematical statistics", Wiley (1962)
[4] B.L. van der Waerden, "Mathematische Statistik", Springer (1957)

This article was adapted from an original article by M.S. Nikulin (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.