Bahadur representation

From Encyclopedia of Mathematics
An approximation of sample quantiles by empirical distribution functions.

Let $U _ { 1 } , \dots , U _ { n } , \dots$ be a sequence of independent uniform-$( 0,1 )$ random variables (cf. also Random variable). Write

\begin{equation*} \Gamma _ { n } ( t ) = \frac { 1 } { n } \sum _ { i = 1 } ^ { n } 1 _ { [ 0 , t ] } ( U _ { i } ) \end{equation*}

for the empirical distribution function (cf. Distribution function; Empirical distribution) of the first $n$ random variables and denote the uniform empirical process by

\begin{equation*} \alpha _ { n } ( t ) = n ^ { 1 / 2 } ( \Gamma _ { n } ( t ) - t ) , \quad 0 \leq t \leq 1. \end{equation*}
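As a concrete numerical illustration (not part of the original article), the following Python/NumPy sketch computes $\Gamma_n$ and $\alpha_n$ for a simulated uniform sample; the helper names `Gamma_n` and `alpha_n` are ours:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
u = np.sort(rng.uniform(size=n))  # U_1, ..., U_n, sorted once for fast counting

def Gamma_n(t):
    """Empirical distribution function: fraction of the U_i that are <= t."""
    return np.searchsorted(u, t, side="right") / n

def alpha_n(t):
    """Uniform empirical process: n^{1/2} (Gamma_n(t) - t)."""
    return np.sqrt(n) * (Gamma_n(t) - np.asarray(t))

t = np.linspace(0.0, 1.0, 201)
# By the Glivenko-Cantelli theorem, sup_t |Gamma_n(t) - t| is small for large n,
# while alpha_n stays of order 1 (it converges in distribution to a Brownian bridge).
```

The evaluation on a finite grid of $t$-values is only an approximation of suprema over $[0,1]$, but suffices for illustration.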

Let $\Gamma _ { n } ^ { - 1 }$ be the left-continuous inverse or quantile function (cf. also Quantile) corresponding to $\Gamma _ { n }$ and write

\begin{equation*} \beta _ { n } ( t ) = n ^ { 1 / 2 } \left( \Gamma _ { n } ^ { - 1 } ( t ) - t \right) , \quad 0 \leq t \leq 1, \end{equation*}

for the uniform quantile process. Denote the supremum norm on $[0,1]$ by $\| \cdot \|$. It is easy to show that $\operatorname { lim } _ { n \rightarrow \infty } \| \alpha _ { n } + \beta _ { n } \| = 0$ a.s., implying, e.g., that $\Gamma _ { n } ^ { - 1 } ( t ) = 2 t - \Gamma _ { n } ( t ) + o \left( n ^ { - 1 / 2 } \right)$ a.s., uniformly in $0 \leq t \leq 1$. The process $\alpha _ { n } + \beta _ { n }$ was introduced by R.R. Bahadur in [a3] and further investigated by J.C. Kiefer in [a11], [a12]. Therefore this process is called the (uniform) Bahadur–Kiefer process. A final and much more delicate result for $\| \alpha _ { n } + \beta _ { n } \|$ is

\begin{equation} \tag{a1} \operatorname { lim } _ { n \rightarrow \infty } \frac { n ^ { 1 / 4 } } { ( \operatorname { log } n ) ^ { 1 / 2 } } \frac { \| \alpha _ { n } + \beta _ { n } \| } { \| \alpha _ { n } \| ^ { 1 / 2 } } = 1 \text{ a.s.}, \end{equation}

see [a7], [a8], [a12], [a13]. Combining (a1) with the well-known results for $\alpha _ { n }$, it immediately follows that

\begin{equation} \tag{a2} \limsup _ { n \rightarrow \infty } \frac { n ^ { 1 / 4 } } { ( \operatorname { log } n ) ^ { 1 / 2 } ( \operatorname { log } \operatorname { log } n ) ^ { 1 / 4 } } \| \alpha _ { n } + \beta _ { n } \| = 2 ^ { - 1 / 4 } \text{ a.s.} \end{equation}

and

\begin{equation*} \frac { n ^ { 1 / 4 } } { ( \operatorname { log } n ) ^ { 1 / 2 } } \| \alpha _ { n } + \beta _ { n } \| \stackrel { d } { \rightarrow } \| B \| ^ { 1 / 2 }, \end{equation*}

where $B$ is a standard Brownian bridge (cf. Non-parametric methods in statistics). Similar results exist for a single, fixed $t \in ( 0,1 )$:

\begin{equation*} \operatorname { limsup } _ { n \rightarrow \infty } \pm \frac { n ^ { 1 / 4 } } { ( \operatorname { log } \operatorname { log } n ) ^ { 3 / 4 } } ( \alpha _ { n } ( t ) + \beta _ { n } ( t ) ) = 2 ^ { 5 / 4 } 3 ^ { - 3 / 4 } ( t ( 1 - t ) ) ^ { 1 / 4 } \text { a.s.} \end{equation*}

and

\begin{equation*} n ^ { 1 / 4 } ( \alpha _ { n } ( t ) + \beta _ { n } ( t ) ) \stackrel { d } { \rightarrow } Z | B ( t ) | ^ { 1 / 2 }, \end{equation*}

where $Z$ is standard normal (cf. Normal distribution) and independent of $B$. Extensions of the latter two results to finitely many $t$'s also exist, see [a4], [a5].
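The normalization in (a1) can be checked by simulation. The sketch below (our own, not from the article) evaluates the processes on the grid $t = i/n$, using that $\Gamma_n^{-1}(i/n)$ equals the $i$-th order statistic; the grid supremum only approximates the true supremum, but the order of magnitude is right:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
u = np.sort(rng.uniform(size=n))

t = np.arange(1, n + 1) / n
gamma = np.searchsorted(u, t, side="right") / n  # Gamma_n(i/n)
gamma_inv = u                                    # Gamma_n^{-1}(i/n) = i-th order statistic
alpha = np.sqrt(n) * (gamma - t)                 # uniform empirical process on the grid
beta = np.sqrt(n) * (gamma_inv - t)              # uniform quantile process on the grid

# The normalized quantity of (a1); it tends to 1 a.s. as n -> infinity,
# so for large n it should be of order 1 (convergence is slow, only logarithmic).
ratio = (n ** 0.25 / np.log(n) ** 0.5) \
    * np.max(np.abs(alpha + beta)) / np.max(np.abs(alpha)) ** 0.5
```

Note how small $\| \alpha_n + \beta_n \|$ is compared with $\| \alpha_n \|$ itself: the Bahadur–Kiefer process lives on the scale $n^{-1/4}$ rather than $1$.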

Let $F$ be a continuous distribution function on $\mathbf{R}$, with quantile function $Q$, and set $X _ { i } = Q ( U _ { i } )$, $i = 1,2 , \dots$. Then the $X_i$ are independent and distributed according to $F$. Now define $F _ { n }$ to be the empirical distribution function of the first $n$ of the $X_i$ and write $\alpha_{n, F} = n ^ { 1 / 2 } ( F _ { n } - F )$ for the corresponding empirical process. Denote the empirical quantile function by $Q _ { n }$ and define the quantile process by $\beta _ { n , F } = n ^ { 1 / 2 } ( f \circ Q ) ( Q _ { n } - Q )$, where $f = F ^ { \prime }$ is the density of $F$. The general Bahadur–Kiefer process is now defined as $\alpha _ { n , F } \circ Q + \beta _ { n , F }$. Since $\alpha _ { n , F } \circ Q \equiv \alpha _ { n }$, results for $\alpha _ { n , F } \circ Q + \beta _ { n , F }$ can be obtained when $\beta _ { n , F }$ is "close" to $\beta _ { n }$. Under natural conditions, see e.g. [a13], results hold which imply that for any $\varepsilon > 0$

\begin{equation*} \| \beta _ { n , F } - \beta _ { n } \| = o \left( \frac { 1 } { n ^ { 1 / 2 - \varepsilon } } \right) \text{ a.s.} \end{equation*}

This yields all the above results with $\beta _ { n }$ replaced with $\beta _ { n , F }$. Observe that (a2) now leads to the following Bahadur representation: If $f$ is bounded away from $0$, then uniformly in $t \in ( 0,1 )$,

\begin{equation*} Q _ { n } ( t ) = Q ( t ) + \frac { t - F _ { n } ( Q ( t ) ) } { f ( Q ( t ) ) } + O \left( \frac { ( \operatorname { log } n ) ^ { 1 / 2 } ( \operatorname { log } \operatorname { log } n ) ^ { 1 / 4 } } { n ^ { 3 / 4 } } \right) \text{ a.s.} \end{equation*}
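As a sanity check of this representation (again our own sketch, not from the article), take the standard exponential distribution, for which $F$, $Q$ and $f$ are explicit; the remainder after the first two terms should be far smaller than the $n^{-1/2}$-scale error of $Q_n$ itself:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
x = np.sort(-np.log(1.0 - rng.uniform(size=n)))  # X_i = Q(U_i), standard exponential

def F(y): return 1.0 - np.exp(-y)   # distribution function
def Q(t): return -np.log(1.0 - t)   # quantile function
def f(y): return np.exp(-y)         # density f = F'

def F_n(y):
    """Empirical distribution function of X_1, ..., X_n."""
    return np.searchsorted(x, y, side="right") / n

def Q_n(t):
    """Empirical quantile function: left-continuous inverse of F_n."""
    # ceil(n t)-th order statistic; the 1e-9 guards against floating-point error.
    idx = np.maximum(np.ceil(np.asarray(t) * n - 1e-9).astype(int), 1)
    return x[idx - 1]

t = np.linspace(0.05, 0.95, 19)
approx = Q(t) + (t - F_n(Q(t))) / f(Q(t))    # first two terms of the representation
remainder = np.max(np.abs(Q_n(t) - approx))  # should be of order n^{-3/4}, up to log factors
```

For $n = 10{,}000$ the remainder is of order $n^{-3/4} = 10^{-3}$ (times log factors), i.e. an order of magnitude below the $n^{-1/2} = 10^{-2}$ fluctuations of $Q_n - Q$.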

There are many extensions of the above results, e.g., to various generalizations of quantiles (one- and multi-dimensional) [a1], [a9], to weighted processes [a4], [a7], to single $t _ { n }$'s converging to $0$ [a6], to the two-sample case, to censorship models [a5], to partial-sum processes [a7], to dependent random variables [a2], [a4], [a10], and to regression models [a9].

References

[a1] M.A. Arcones, "The Bahadur–Kiefer representation of the two-dimensional spatial medians" Ann. Inst. Statist. Math. , 50 (1998) pp. 71–86
[a2] M.A. Arcones, "The Bahadur–Kiefer representation for $U$-quantiles" Ann. Statist. , 24 (1996) pp. 1400–1422
[a3] R.R. Bahadur, "A note on quantiles in large samples" Ann. Math. Stat. , 37 (1966) pp. 577–580
[a4] J. Beirlant, P. Deheuvels, J.H.J. Einmahl, D.M. Mason, "Bahadur–Kiefer theorems for uniform spacings processes" Theory Probab. Appl. , 36 (1992) pp. 647–669
[a5] J. Beirlant, J.H.J. Einmahl, "Bahadur–Kiefer theorems for the product-limit process" J. Multivariate Anal. , 35 (1990) pp. 276–294
[a6] P. Deheuvels, "Pointwise Bahadur–Kiefer-type theorems II" , Nonparametric statistics and related topics (Ottawa, 1991) , North-Holland (1992) pp. 331–345
[a7] P. Deheuvels, D.M. Mason, "Bahadur–Kiefer-type processes" Ann. of Probab. , 18 (1990) pp. 669–697
[a8] J.H.J. Einmahl, "A short and elementary proof of the main Bahadur–Kiefer theorem" Ann. of Probab. , 24 (1996) pp. 526–531
[a9] X. He, Q.-M. Shao, "A general Bahadur representation of $M$-estimators and its application to linear regression with nonstochastic designs" Ann. Statist. , 24 (1996) pp. 2608–2630
[a10] C.H. Hesse, "A Bahadur–Kiefer type representation for a large class of stationary, possibly infinite variance, linear processes" Ann. Statist. , 18 (1990) pp. 1188–1202
[a11] J.C. Kiefer, "On Bahadur's representation of sample quantiles" Ann. Math. Stat. , 38 (1967) pp. 1323–1342
[a12] J.C. Kiefer, "Deviations between the sample quantile process and the sample df" M. Puri (ed.) , Non-parametric Techniques in Statistical Inference , Cambridge Univ. Press (1970) pp. 299–319
[a13] G.R. Shorack, J.A. Wellner, "Empirical processes with applications to statistics" , Wiley (1986)
How to Cite This Entry:
Bahadur representation. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Bahadur_representation&oldid=13619
This article was adapted from an original article by J.H.J. Einmahl (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article