# Contiguity of probability measures

The concept of contiguity was formally introduced and developed by L. Le Cam in [a7]. It refers to sequences of probability measures, and is meant to be a measure of "closeness" or "nearness" of such sequences (cf. also Probability measure). It may also be viewed as a kind of uniform asymptotic mutual absolute continuity of probability measures. Actually, the need for the introduction of such a concept arose as early as 1955 or 1956, and it was at that time that Le Cam selected the name "contiguity", with the help of J.D. Esary (see [a9], p. 29).

There are several equivalent characterizations of contiguity, and the following may serve as its definition. Two sequences $\{P_n\}$ and $\{P'_n\}$ are said to be contiguous if for any $A_n \in \mathcal{A}_n$ for which $P_n(A_n) \to 0$, it also happens that $P'_n(A_n) \to 0$, and vice versa, where $(\mathcal{X}_n, \mathcal{A}_n)$ is a sequence of measurable spaces and $P_n$ and $P'_n$ are probability measures on $\mathcal{A}_n$. Here and in the sequel, all limits are taken as $n \to \infty$. It is worth mentioning at this point that contiguity is transitive: if $\{P_n\}$, $\{P'_n\}$ are contiguous and $\{P'_n\}$, $\{P''_n\}$ are contiguous, then so are $\{P_n\}$, $\{P''_n\}$. Contiguity simplifies many arguments in passing to the limit, and it plays a major role in the asymptotic theory of statistical inference (cf. also Statistical hypotheses, verification of). Thus, contiguity is used in parts of [a8] as a tool for obtaining asymptotic results in an elegant manner; [a9] is a more accessible general reference on contiguity and its uses. In a Markovian framework, contiguity, some related results and selected statistical applications are discussed in [a11]. For illustrative purposes, [a11] is used here as the standard reference.
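The classical normal location model provides a standard illustration of the definition (the following example is added for concreteness and is not taken verbatim from the original entry):

```latex
% Local alternatives in a normal location model: for fixed h > 0 let
\[
  P_n = N(0,1)^{\otimes n}, \qquad
  P'_n = N\!\bigl(h/\sqrt{n},\,1\bigr)^{\otimes n}.
\]
% The log-likelihood ratio is exactly normal under P_n,
\[
  \Lambda_n \;=\; \log\frac{dP'_n}{dP_n}
            \;=\; \frac{h}{\sqrt{n}}\sum_{i=1}^{n} X_i \;-\; \frac{h^2}{2}
            \;\sim\; N\!\Bigl(-\frac{h^2}{2},\,h^2\Bigr),
\]
% and the two sequences are contiguous.  With a fixed mean instead,
% P''_n = N(h,1)^{\otimes n}, the sets A_n = \{\bar X_n > h/2\} satisfy
% P_n(A_n) \to 0 while P''_n(A_n) \to 1, so \{P_n\} and \{P''_n\} are
% entirely separated.
```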

The definition of contiguity calls for its comparison with more familiar modes of "closeness", such as that based on the sup- (or $L_1$-) norm, defined by

$$\|P_n - P'_n\| = 2\sup\{|P_n(A) - P'_n(A)| :\ A \in \mathcal{A}_n\} = \int |f_n - f'_n|\,d\mu_n$$

(where $f_n$, $f'_n$ are densities of $P_n$, $P'_n$ with respect to any dominating $\sigma$-finite measure $\mu_n$),
and also the concept of mutual absolute continuity (cf. also Absolute continuity), $P_n \approx P'_n$, $n \geq 1$. It is always true that convergence to zero in the sup-norm implies contiguity, but the converse is not true (see, e.g., [a11], p. 12; the special case of Example 3.1(i)). So, contiguity is a weaker measure of "closeness" of two sequences of probability measures than that provided by sup-norm convergence. Also, by means of examples, it may be illustrated that it can happen that $P_n \approx P'_n$ for all $n$ (i.e., $P_n(A) = 0$ if and only if $P'_n(A) = 0$ for all $A \in \mathcal{A}_n$, $n \geq 1$) whereas $\{P_n\}$ and $\{P'_n\}$ are not contiguous (see, e.g., [a11], pp. 9–10; Example 2.2). That contiguity need not imply absolute continuity for any $n$ is again demonstrated by examples (see, e.g., [a11], p. 9; Example 2.1 and Remark 2.3). This should not come as a surprise, since contiguity is interpreted as asymptotic absolute continuity rather than absolute continuity for any finite $n$. It is to be noted, however, that a pair of contiguous sequences of probability measures can always be replaced by another pair of contiguous sequences whose respective members are mutually absolutely continuous and lie arbitrarily close to the given ones in the sup-norm sense (see, e.g., [a11], pp. 25–26; Thm. 5.1).
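The gap between sup-norm convergence and contiguity can be illustrated numerically (a hypothetical example, not part of the original entry; the function names are illustrative). For $P_n = N(0,1)^{\otimes n}$ and $P'_n = N(h/\sqrt{n},1)^{\otimes n}$, the two sequences are contiguous, yet their sup-norm distance is the same for every $n$: since the likelihood ratio depends on the sample only through its sum, the distance reduces to that between $N(0,1)$ and $N(h,1)$.

```python
from math import erf, sqrt

def Phi(x: float) -> float:
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def tv_product_normals(h: float, n: int) -> float:
    """Sup-norm (total-variation) distance between N(0,1)^n and
    N(h/sqrt(n),1)^n products.  The likelihood ratio depends on the data
    only through the sample sum, so the distance equals that between
    N(0,1) and N(h,1), namely 2*Phi(h/2) - 1 -- independent of n."""
    return 2.0 * Phi(h / 2.0) - 1.0

# The sup-norm distance does not tend to 0 as n grows, yet the two
# sequences are contiguous: contiguity is strictly weaker than
# sup-norm convergence.
for n in (10, 100, 10_000):
    print(n, tv_product_normals(1.0, n))   # constant in n
```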

The concept exactly opposite to contiguity is that of (asymptotic) entire separation. Thus, two sequences $\{P_n\}$ and $\{P'_n\}$ are said to be (asymptotically) entirely separated if there exist a subsequence $\{m\} \subseteq \{n\}$ and events $A_m \in \mathcal{A}_m$ such that $P_m(A_m) \to 1$ whereas $P'_m(A_m) \to 0$ as $m \to \infty$ (see [a2], p. 24).

Alternative characterizations of contiguity are provided in [a11], Def. 2.1; Prop. 3.1; Prop. 6.1. In terms of sequences of random variables $\{T_n\}$ (cf. also Random variable), the two sequences $\{P_n\}$ and $\{P'_n\}$ are contiguous if $T_n \to 0$ in $P_n$-probability implies $T_n \to 0$ in $P'_n$-probability, and vice versa. Thus, under contiguity, convergence in probability of sequences of random variables under $P_n$ and under $P'_n$ are equivalent and the limits are the same. Actually, contiguity of $\{P_n\}$ and $\{P'_n\}$ is determined by the behaviour of the sequences of probability measures $\{\mathcal{L}(\Lambda_n \mid P_n)\}$ and $\{\mathcal{L}(\Lambda_n \mid P'_n)\}$, where $\Lambda_n = \log(dP'_n/dP_n)$. As explained above, there is no loss in generality in supposing that $P_n$ and $P'_n$ are mutually absolutely continuous for all $n$, and thus the log-likelihood function $\Lambda_n$ is well-defined with probability one, under both $P_n$ and $P'_n$, for all $n$. Then, e.g., $\{P_n\}$ and $\{P'_n\}$ are contiguous if and only if $\{\mathcal{L}(\Lambda_n \mid P_n)\}$ and $\{\mathcal{L}(\Lambda_n \mid P'_n)\}$ are relatively compact, or, alternatively, $\{\mathcal{L}(\Lambda_n \mid P_n)\}$ is relatively compact and, for every subsequence $\{\mathcal{L}(\Lambda_m \mid P_m)\}$ converging weakly to a probability measure $L$, one has $\int e^{\lambda}\,dL(\lambda) = 1$, where $\lambda$ is a dummy variable. It should be noted at this point that, under contiguity, the asymptotic distributions, under $P_n$ and $P'_n$, of the likelihood (or log-likelihood) ratios are non-degenerate and distinct. Therefore, the statistical problem of choosing between $P_n$ and $P'_n$ is non-trivial for all sufficiently large $n$.
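The condition $\int e^{\lambda}\,dL(\lambda) = 1$ can be checked by simulation in the normal location model (an illustrative sketch, not part of the original entry; all names are hypothetical). With $P_n = N(0,1)^{\otimes n}$ and $P'_n = N(h/\sqrt{n},1)^{\otimes n}$, the log-likelihood ratio is $\Lambda_n = hZ_n - h^2/2$ with $Z_n = n^{-1/2}\sum_i X_i \sim N(0,1)$ exactly, so $\mathcal{L}(\Lambda_n \mid P_n) = N(-h^2/2, h^2)$ for every $n$:

```python
import random
from math import exp

# Under P_n the statistic Z_n = sum(X_i)/sqrt(n) is exactly N(0,1), and
# Lambda_n = h*Z_n - h^2/2, so L(Lambda_n | P_n) = N(-h^2/2, h^2) for
# every n.  Check E[exp(Lambda_n)] = 1 by Monte Carlo (illustrative).
rng = random.Random(12345)
h, reps = 1.0, 200_000
draws = [h * rng.gauss(0.0, 1.0) - h * h / 2.0 for _ in range(reps)]

mean_L = sum(draws) / reps                     # near -h^2/2 = -0.5
mean_expL = sum(exp(x) for x in draws) / reps  # near 1 (contiguity condition)
print(mean_L, mean_expL)
```

The sample mean of $e^{\Lambda_n}$ hovers near $1$, in agreement with the characterization: the weak limit $L = N(-h^2/2, h^2)$ satisfies $\int e^{\lambda}\,dL(\lambda) = e^{-h^2/2 + h^2/2} = 1$.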

An important consequence of contiguity is the following. With $\Lambda_n$ as above, let $T_n$ be a $k$-dimensional random vector such that $\mathcal{L}[(T_n, \Lambda_n) \mid P_n] \Rightarrow \tilde{L}$, a probability measure (where "$\Rightarrow$" stands for weak convergence of probability measures). Then $\mathcal{L}[(T_n, \Lambda_n) \mid P'_n] \Rightarrow \tilde{L}'$, and $\tilde{L}'$ is determined by $d\tilde{L}'(t, \lambda) = e^{\lambda}\,d\tilde{L}(t, \lambda)$. In particular, one may determine the asymptotic distribution of $T_n$ under $P'_n$ (the alternative hypothesis) in terms of the asymptotic distribution of $T_n$ under $P_n$ (the null hypothesis). Typically, $\mathcal{L}(\Lambda_n \mid P_n) \Rightarrow N(-\sigma^2/2, \sigma^2)$ and then $\mathcal{L}(\Lambda_n \mid P'_n) \Rightarrow N(\sigma^2/2, \sigma^2)$ for some $\sigma > 0$. Also, if it so happens that $\mathcal{L}(T_n \mid P_n) \Rightarrow N(\mu, \Sigma)$ and $\Lambda_n - h'T_n \to -\tfrac{1}{2}h'\Sigma h$ in $P_n$-probability for every $h$ in $\mathbb{R}^k$ (where $'$ denotes transpose and $\Sigma$ is a positive-definite covariance matrix), then, under contiguity again, $\mathcal{L}(T_n \mid P'_n) \Rightarrow N(\mu + \Sigma h, \Sigma)$.
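This shift phenomenon can be seen in a small simulation (illustrative, not from the original entry; the model and names are assumptions). In the normal location model with $T_n = n^{-1/2}\sum_i X_i$, one has $\mu = 0$, $\Sigma = 1$, and the law of $T_n$ moves from $N(0,1)$ under the null to $N(h,1)$ under the contiguous alternative, i.e., a mean shift of $\Sigma h$ with unchanged variance:

```python
import random
from math import sqrt

def t_n(mean_shift: float, n: int, rng: random.Random) -> float:
    """T_n = sum(X_i)/sqrt(n) for X_i i.i.d. N(mean_shift/sqrt(n), 1)."""
    return sum(rng.gauss(mean_shift / sqrt(n), 1.0) for _ in range(n)) / sqrt(n)

rng = random.Random(2024)
h, n, reps = 1.5, 40, 20_000
under_null = [t_n(0.0, n, rng) for _ in range(reps)]   # law N(0, 1)
under_alt  = [t_n(h,   n, rng) for _ in range(reps)]   # law N(h, 1)

mean_null = sum(under_null) / reps
mean_alt  = sum(under_alt)  / reps
print(mean_null, mean_alt)   # mean shifts by Sigma*h = h
```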

In the context of parametric models in statistics, contiguity results avail themselves in expanding (in the probability sense) a certain log-likelihood function, in obtaining its asymptotic distribution, in approximating the given family of probability measures by exponential probability measures in the neighbourhood of a parameter point, and in obtaining a convolution representation of the limiting probability measure of the distributions of certain estimates. All these results may then be exploited in deriving asymptotically optimal tests for certain statistical hypothesis testing problems (cf. Statistical hypotheses, verification of), and in studying the asymptotic efficiency (cf. also Efficiency, asymptotic) of estimates. In such a framework, random variables $X_1, \ldots, X_n$ are defined on a measurable space $(\mathcal{X}, \mathcal{A})$, $P_\theta$ is a probability measure defined on $\mathcal{A}$ and depending on the parameter $\theta \in \Theta$, an open subset of $\mathbb{R}^k$, $P_{n,\theta}$ is the restriction of $P_\theta$ to $\mathcal{A}_n = \sigma(X_1, \ldots, X_n)$, and the probability measures of interest are usually $P_{n,\theta}$ and $P_{n,\theta_n}$, where $\theta_n = \theta + h/\sqrt{n}$, $h \in \mathbb{R}^k$. Under certain regularity conditions, $\{P_{n,\theta}\}$ and $\{P_{n,\theta_n}\}$ are contiguous. The log-likelihood function $\Lambda_n = \log(dP_{n,\theta_n}/dP_{n,\theta})$ expands in $P_{n,\theta}$- (and $P_{n,\theta_n}$-) probability; thus:

$$\Lambda_n = h'\Delta_n(\theta) - \tfrac{1}{2}h'\Gamma(\theta)h + o_{P_{n,\theta}}(1),$$
where $\Delta_n(\theta)$ is a $k$-dimensional random vector defined in terms of the derivative of an underlying probability density function, and $\Gamma(\theta)$ is a $k \times k$ covariance matrix. Furthermore,

$$\mathcal{L}[\Delta_n(\theta) \mid P_{n,\theta}] \Rightarrow N(0, \Gamma(\theta)), \qquad \text{so that} \qquad \mathcal{L}(\Lambda_n \mid P_{n,\theta}) \Rightarrow N\!\left(-\tfrac{1}{2}h'\Gamma(\theta)h,\ h'\Gamma(\theta)h\right).$$
In addition, $\|P_{n,\theta_n} - R_{n,h}\| \to 0$ uniformly over bounded sets of $h$, where $R_{n,h}$ is the normalized version of the exponential measure $dR^*_{n,h} = e^{h'\Delta^*_n(\theta)}\,dP_{n,\theta}$, $\Delta^*_n(\theta)$ being a suitably truncated version of $\Delta_n(\theta)$. Finally, for estimates $T_n$ (of $\theta$) for which $\mathcal{L}[\sqrt{n}(T_n - \theta_n) \mid P_{n,\theta_n}] \Rightarrow L(\theta)$, a probability measure, one has the convolution representation $L(\theta) = N(0, \Gamma^{-1}(\theta)) * L_2(\theta)$, for a specified probability measure $L_2(\theta)$. This last result is due to J. Hájek [a3] (see also [a6]).
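As a concrete instance (added here for illustration; the specific family is chosen for simplicity and is not part of the original entry), in the normal location family the log-likelihood expansion holds exactly, with no remainder term:

```latex
% LAN expansion for X_1,...,X_n i.i.d. N(theta, 1), k = 1,
% theta_n = theta + h/sqrt(n):
\[
  \Lambda_n
  = \log\frac{dP_{n,\theta_n}}{dP_{n,\theta}}
  = h\,\Delta_n(\theta) - \frac{h^2}{2}\,\Gamma(\theta),
  \qquad
  \Delta_n(\theta) = \frac{1}{\sqrt{n}}\sum_{i=1}^{n}(X_i - \theta),
  \quad \Gamma(\theta) = 1,
\]
% and L[Delta_n(theta) | P_{n,theta}] = N(0, 1) exactly for every n.
```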

Contiguity of two sequences of probability measures $\{P_{n,\theta}\}$ and $\{P_{n,\theta_n}\}$, as defined above, may be generalized as follows: replace $n$ by $\alpha_n$, where $\{\alpha_n\}$ is a sequence of positive integers which converges to $\infty$ non-decreasingly, and replace $\sqrt{n}$ by $r_n$, where the $r_n$ are positive real numbers tending to $\infty$ non-decreasingly. Then, under suitable regularity conditions, $\{P_{\alpha_n,\theta}\}$ and $\{P_{\alpha_n,\theta + h/r_n}\}$ are contiguous if and only if $\limsup_n (\alpha_n / r_n^2) < \infty$ (see [a1], Thm. 2.1).

Some additional references to contiguity and its statistical applications are [a4], [a5], [a2], [a12], [a10].

#### References

- [a1] M.G. Akritas, M.L. Puri, G.G. Roussas, "Sample size, parameter rates and contiguity: the i.i.d. case", Commun. Statist. Theor. Meth. A8:1 (1979) pp. 71–83
- [a2] P.E. Greenwood, A.N. Shiryayev, "Contiguity and the statistical invariance principle", Gordon & Breach (1985)
- [a3] J. Hájek, "A characterization of limiting distributions of regular estimates", Z. Wahrscheinlichkeitsth. verw. Gebiete 14 (1970) pp. 323–330
- [a4] J. Hájek, Z. Šidák, "Theory of rank tests", Acad. Press (1967)
- [a5] I.A. Ibragimov, R.Z. Has'minskii, "Statistical estimation", Springer (1981)
- [a6] N. Inagaki, "On the limiting distribution of a sequence of estimators with uniformity property", Ann. Inst. Statist. Math. 22 (1970) pp. 1–13
- [a7] L. Le Cam, "Locally asymptotically normal families of distributions", Univ. Calif. Publ. in Statist. 3 (1960) pp. 37–98
- [a8] L. Le Cam, "Asymptotic methods in statistical decision theory", Springer (1986)
- [a9] L. Le Cam, G.L. Yang, "Asymptotics in statistics: some basic concepts", Springer (1990)
- [a10] J. Pfanzagl, "Parametric statistical inference", W. de Gruyter (1994)
- [a11] G.G. Roussas, "Contiguity of probability measures: some applications in statistics", Cambridge Univ. Press (1972)
- [a12] H. Strasser, "Mathematical theory of statistics", W. de Gruyter (1985)
How to Cite This Entry:
Contiguity of probability measures. George G. Roussas (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Contiguity_of_probability_measures&oldid=17560
This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098