An approximation of sample quantiles by empirical distribution functions.

Let $U_1, U_2, \ldots$ be a sequence of independent uniform-$(0,1)$ random variables (cf. also Random variable). Write $\Gamma_n$ for the empirical distribution function (cf. Distribution function; Empirical distribution) of the first $n$ random variables and denote the uniform empirical process by $\alpha_n = n^{1/2} (\Gamma_n - I)$, where $I$ is the identity on $[0,1]$. Let $\Gamma_n^{-1}$ be the left-continuous inverse or quantile function (cf. also Quantile) corresponding to $\Gamma_n$ and write $\beta_n = n^{1/2} (\Gamma_n^{-1} - I)$ for the uniform quantile process. Denote the supremum norm on $[0,1]$ by $\| \cdot \|$ and set $R_n = \alpha_n + \beta_n$. It is easy to show that $\| R_n \| \to 0$ a.s., implying, e.g., that $\| \alpha_n \| / \| \beta_n \| \to 1$ a.s., $n \to \infty$. The process $R_n$ was introduced by R.R. Bahadur in [a3] and further investigated by J.C. Kiefer in [a11], [a12]. Therefore this process is called the (uniform) Bahadur–Kiefer process. A final and much more delicate result for $\| R_n \|$ is

$$\lim_{n \to \infty} \frac{n^{1/4} \| R_n \|}{(\log n)^{1/2} \| \alpha_n \|^{1/2}} = 1 \quad \text{a.s.}; \tag{a1}$$
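These uniform processes are easy to simulate; the following minimal sketch (NumPy assumed; the function name `bahadur_kiefer_sup` and the evaluation grid are illustrative choices, not part of the original) computes the empirical process $n^{1/2}(\Gamma_n - I)$, the quantile process $n^{1/2}(\Gamma_n^{-1} - I)$, and the supremum norm of their sum on a grid:

```python
import numpy as np

def bahadur_kiefer_sup(n, rng, grid_size=2000):
    """Simulate n uniforms and evaluate the sup norms of alpha_n,
    beta_n and R_n = alpha_n + beta_n on a grid in (0, 1)."""
    u = np.sort(rng.uniform(size=n))
    t = np.linspace(1e-6, 1 - 1e-6, grid_size)
    # Empirical d.f.: Gamma_n(t) = #{i : U_i <= t} / n
    gamma = np.searchsorted(u, t, side="right") / n
    # Left-continuous inverse: Gamma_n^{-1}(t) = U_(k) for (k-1)/n < t <= k/n
    gamma_inv = u[np.ceil(t * n).astype(int) - 1]
    alpha = np.sqrt(n) * (gamma - t)   # uniform empirical process
    beta = np.sqrt(n) * (gamma_inv - t)  # uniform quantile process
    r = alpha + beta                   # Bahadur-Kiefer process
    return np.max(np.abs(r)), np.max(np.abs(alpha)), np.max(np.abs(beta))

rng = np.random.default_rng(0)
r, a, b = bahadur_kiefer_sup(200_000, rng)
# ||R_n|| is of much smaller order than ||alpha_n|| and ||beta_n||,
# and the ratio ||alpha_n||/||beta_n|| is close to 1.
print(r, a, b)
```

For $n = 200{,}000$ the sup norm of $R_n$ is roughly of order $n^{-1/4} (\log n)^{1/2}$, visibly smaller than the two individual sup norms, which are of order one.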

see [a7], [a8], [a12], [a13]. From the well-known results for $\| \alpha_n \|$ it now immediately follows from (a1) that

$$\limsup_{n \to \infty} \frac{n^{1/4} \| R_n \|}{(\log n)^{1/2} (\log\log n)^{1/4}} = 2^{-1/4} \quad \text{a.s.} \tag{a2}$$

and

$$\frac{n^{1/4} \| R_n \|}{(\log n)^{1/2}} \xrightarrow{d} \| B \|^{1/2}, \quad n \to \infty,$$

where $B$ is a standard Brownian bridge (cf. Non-parametric methods in statistics). Similar results exist for a single, fixed $t \in (0,1)$:

$$\limsup_{n \to \infty} \frac{\pm n^{1/4} R_n(t)}{(\log\log n)^{3/4}} = \frac{2^{5/4}}{3^{3/4}} (t(1-t))^{1/4} \quad \text{a.s.}$$

and

$$n^{1/4} R_n(t) \xrightarrow{d} | B(t) |^{1/2} N,$$

where $N$ is standard normal (cf. Normal distribution) and independent of $B(t)$. Extensions of the latter two results to finitely many $t$'s also exist, see [a4], [a5].
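The passage from the ratio limit theorem to the two results for $\| R_n \|$ can be made explicit; a sketch, using the classical law of the iterated logarithm for $\| \alpha_n \|$ and Kolmogorov's limit theorem $\| \alpha_n \| \xrightarrow{d} \| B \|$:

```latex
% (a1) says n^{1/4}\|R_n\| = (1+o(1))\,(\log n)^{1/2}\,\|\alpha_n\|^{1/2} a.s.
% Combining this with the law of the iterated logarithm
%   \limsup_{n\to\infty} \|\alpha_n\| / ((1/2)\log\log n)^{1/2} = 1  a.s.
% gives
\[
\limsup_{n\to\infty}
  \frac{n^{1/4}\,\|R_n\|}{(\log n)^{1/2}(\log\log n)^{1/4}}
  = \limsup_{n\to\infty}
    \left( \frac{\|\alpha_n\|}{(\log\log n)^{1/2}} \right)^{1/2}
  = \left( \tfrac{1}{2} \right)^{1/4} = 2^{-1/4}
  \quad \text{a.s.,}
\]
% while \|\alpha_n\| \to \|B\| in distribution (Kolmogorov) yields
\[
\frac{n^{1/4}\,\|R_n\|}{(\log n)^{1/2}} \xrightarrow{d} \|B\|^{1/2}.
\]
```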

Let $F$ be a continuous distribution function on $\mathbf{R}$, with quantile function $Q = F^{-1}$, and set $X_i = Q(U_i)$, $i = 1, 2, \ldots$. Then the $X_i$ are independent and distributed according to $F$. Now define $F_n$ to be the empirical distribution function of the first $n$ of the $X_i$ and write $\hat\alpha_n = n^{1/2} (F_n - F)$ for the corresponding empirical process. Denote the empirical quantile function by $Q_n$ and define the quantile process by $\hat\beta_n = n^{1/2} f(Q) (Q_n - Q)$, where $f = F'$. The general Bahadur–Kiefer process is now defined as $\hat R_n = \hat\alpha_n(Q) + \hat\beta_n$. Since $\hat\alpha_n(Q) = \alpha_n$, results for $\hat R_n$ can be obtained when $\hat\beta_n$ is "close" to $\beta_n$. Under natural conditions, see e.g. [a13], results hold which imply that for any $\varepsilon > 0$

$$n^{1/4 - \varepsilon} \| \hat\beta_n - \beta_n \| \to 0 \quad \text{a.s.}$$

This yields all the above results with $R_n$ replaced with $\hat R_n$. Observe that (a2) now leads to the following Bahadur representation: If $f$ is bounded away from $0$, then uniformly in $t \in (0,1)$,

$$Q_n(t) = Q(t) + \frac{t - F_n(Q(t))}{f(Q(t))} + O\!\left( n^{-3/4} (\log n)^{1/2} (\log\log n)^{1/4} \right) \quad \text{a.s.}$$

There are many extensions of the above results, e.g., to various generalizations of quantiles (one- and multi-dimensional) [a1], [a9], to weighted processes [a4], [a7], to single $t$'s converging to $0$ [a6], to the two-sample case, to censorship models [a5], to partial-sum processes [a7], to dependent random variables [a2], [a4], [a10], and to regression models [a9].
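The Bahadur representation of a sample quantile can be checked numerically. A minimal sketch, using the standard exponential distribution as an illustrative choice (so $Q(t) = -\log(1-t)$ and $f(x) = e^{-x}$; NumPy assumed), compares the sample quantile $Q_n(t)$ with its Bahadur approximation $Q(t) + \bigl(t - F_n(Q(t))\bigr)/f(Q(t))$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
x = np.sort(rng.exponential(size=n))  # sample from standard exponential F

t = 0.7
Q = lambda s: -np.log1p(-s)   # true quantile function Q = F^{-1}
f = lambda v: np.exp(-v)      # density f = F'

# Sample quantile Q_n(t): the order statistic X_(ceil(nt))
Qn = x[int(np.ceil(n * t)) - 1]
# Empirical d.f. evaluated at the true quantile: F_n(Q(t))
Fn_Q = np.searchsorted(x, Q(t), side="right") / n
# Bahadur approximation of the sample quantile
bahadur = Q(t) + (t - Fn_Q) / f(Q(t))

err = abs(Qn - bahadur)
print(Qn, bahadur, err)
```

The approximation error `err` is of order $n^{-3/4}$ up to logarithmic factors, i.e. of smaller order than the $n^{-1/2}$ fluctuation of the sample quantile itself, which is what the representation asserts.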