
Characteristic function

From Encyclopedia of Mathematics
Revision as of 12:52, 4 February 2012 by Boris Tsirelson.

2020 Mathematics Subject Classification: Primary: 60E10 [MSN][ZBL]

Fourier–Stieltjes transform of a probability measure $\mu$.

The complex-valued function $\hat\mu$ given on the entire axis $\mathbb{R}$ by the formula
$$\hat\mu(t)=\int_{-\infty}^{\infty}e^{itx}\,\mu(dx),\qquad t\in\mathbb{R}.$$
The characteristic function of a random variable $X$ is, by definition, that of its probability distribution $\mu_X$:
$$\varphi_X(t)=\mathsf{E}\,e^{itX}=\int_{-\infty}^{\infty}e^{itx}\,\mu_X(dx).$$

A method connected with the use of characteristic functions was first applied by A.M. Lyapunov and later became one of the basic analytical methods in probability theory. It is used most effectively in proving limit theorems of probability theory. For example, the proof of the central limit theorem for independent identically-distributed random variables with finite second moments reduces to the elementary relation
$$\left(1-\frac{t^2}{2n}+o\!\left(\frac{1}{n}\right)\right)^n\to e^{-t^2/2},\qquad n\to\infty.$$
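As a numerical sketch (an addition to the article, not part of the original text), the elementary relation can be checked directly, and the empirical characteristic function of standardized sums of i.i.d. variables can be compared with the Gaussian limit $e^{-t^2/2}$; the uniform base distribution, the sample sizes, and the point $t=1.7$ below are illustrative choices:

```python
import numpy as np

t = 1.7

# The elementary relation: (1 - t^2/(2n))^n -> exp(-t^2/2) as n -> infinity
n = 10**6
lhs = (1 - t**2 / (2 * n))**n
print(lhs, np.exp(-t**2 / 2))             # nearly equal

# Monte Carlo: empirical c.f. of standardized sums of i.i.d. uniforms
rng = np.random.default_rng(0)
k, reps = 200, 20_000                     # summands per sum, number of sums
u = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(reps, k))  # mean 0, variance 1
s = u.sum(axis=1) / np.sqrt(k)            # standardized sums
ecf = np.mean(np.exp(1j * t * s))         # empirical E exp(itS)
print(abs(ecf - np.exp(-t**2 / 2)))       # close to 0
```

The deterministic check isolates the analytic step of the proof, while the simulation shows the same limit emerging from actual sums of random variables.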

Basic properties of characteristic functions.

1) $\hat\mu(0)=1$ and $\hat\mu$ is positive definite, i.e.
$$\sum_{k,l}\hat\mu(t_k-t_l)\lambda_k\bar\lambda_l\geq 0$$
for any finite sets of complex numbers $\lambda_k$ and real arguments $t_k$;

2) $\hat\mu$ is uniformly continuous on the entire axis $\mathbb{R}$;

3) $|\hat\mu(t)|\leq 1$, $t\in\mathbb{R}$, $\hat\mu(0)=1$.

4) $\overline{\hat\mu(t)}=\hat\mu(-t)$; in particular, $\hat\mu$ takes only real values (and is an even function) if and only if the corresponding probability distribution is symmetric, i.e. $\mu(E)=\mu(-E)$, where $-E=\{x:-x\in E\}$.

5) The characteristic function determines the measure uniquely; the inversion formula
$$\mu\bigl((a,b)\bigr)=\lim_{T\to\infty}\frac{1}{2\pi}\int_{-T}^{T}\frac{e^{-ita}-e^{-itb}}{it}\,\hat\mu(t)\,dt$$
is valid for any interval $(a,b)$ for which the end points are continuity points of the distribution function $F$. If $\hat\mu$ is integrable (absolutely if the integral is understood in the sense of Riemann) on $\mathbb{R}$, then the corresponding distribution function has a density $p$ and
$$p(x)=\frac{1}{2\pi}\int_{-\infty}^{\infty}e^{-itx}\,\hat\mu(t)\,dt.$$
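The density-inversion step can be illustrated numerically (an added sketch, not from the original article); the truncation point $T$ and the grid size below are arbitrary choices, and the standard normal law with $\hat\mu(t)=e^{-t^2/2}$ serves as the test case:

```python
import numpy as np

def density_from_cf(cf, x, T=20.0, m=40001):
    """p(x) = (1/2*pi) * integral of e^{-itx} cf(t) dt, approximated by a
    Riemann sum truncated to [-T, T] (valid when cf is absolutely integrable)."""
    t = np.linspace(-T, T, m)
    dt = t[1] - t[0]
    return (np.exp(-1j * t * x) * cf(t)).sum().real * dt / (2 * np.pi)

cf_normal = lambda t: np.exp(-t**2 / 2)        # c.f. of N(0, 1)
p0 = density_from_cf(cf_normal, 0.0)
print(p0, 1 / np.sqrt(2 * np.pi))              # both ≈ 0.3989
```

Because the Gaussian characteristic function decays rapidly, the truncated Riemann sum reproduces the density value $p(0)=1/\sqrt{2\pi}$ to high accuracy.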
6) The characteristic function of the convolution of two probability measures (of the sum of two independent random variables) is the product of their characteristic functions.
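A quick Monte Carlo illustration of this multiplicative property (an added sketch, not part of the original text), using two independent Exp(1) variables whose sum is Gamma(2,1) with characteristic function $1/(1-it)^2$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, t = 200_000, 0.9
x = rng.exponential(size=n)                # X ~ Exp(1), c.f. 1/(1 - it)
y = rng.exponential(size=n)                # Y ~ Exp(1), independent of X
ecf = lambda s: np.mean(np.exp(1j * t * s))
lhs = ecf(x + y)                           # empirical c.f. of the sum X + Y
rhs = ecf(x) * ecf(y)                      # product of the individual c.f.s
exact = 1 / (1 - 1j * t)**2                # c.f. of Gamma(2, 1)
print(abs(lhs - exact), abs(rhs - exact))  # both small
```

Both the empirical characteristic function of the sum and the product of the individual empirical characteristic functions agree with the closed form up to Monte Carlo error.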

The following three properties express the connection between the existence of moments of a random variable and the order of smoothness of its characteristic function.

7) If $\mathsf{E}|X|^n<\infty$ for some natural number $n$, then for all natural numbers $r\leq n$ the derivative of order $r$ of the characteristic function $\varphi_X$ of the random variable $X$ exists and satisfies the equation
$$\varphi_X^{(r)}(t)=i^r\int_{-\infty}^{\infty}x^r e^{itx}\,\mu_X(dx)=i^r\,\mathsf{E}\,X^r e^{itX}.$$

Hence $\varphi_X^{(r)}(0)=i^r\,\mathsf{E}X^r$, $r=1,\dots,n$.

8) If $\varphi_X^{(2n)}(0)$ exists, then $\mathsf{E}X^{2n}<\infty$;

9) If $\mathsf{E}|X|^n<\infty$ for all $n$ and if
$$\limsup_{n\to\infty}\frac{\bigl(\mathsf{E}|X|^n\bigr)^{1/n}}{n}=\frac{1}{eR}<\infty,$$
then for all $|t|<R$,
$$\varphi_X(t)=\sum_{n=0}^{\infty}\frac{(it)^n}{n!}\,\mathsf{E}X^n.$$
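As a small numerical illustration of property 7 (an added sketch; the standard normal is an illustrative test case): for $X\sim N(0,1)$ with $\varphi(t)=e^{-t^2/2}$, the relation $\varphi''(0)=i^2\,\mathsf{E}X^2=-\mathsf{E}X^2$ lets a central finite difference recover the second moment:

```python
import numpy as np

phi = lambda t: np.exp(-t**2 / 2)      # c.f. of X ~ N(0, 1)
h = 1e-4
# central second difference approximates phi''(0) = i^2 * E X^2 = -E X^2
d2 = (phi(h) - 2 * phi(0.0) + phi(-h)) / h**2
ex2 = -d2                              # recovered second moment E X^2
print(ex2)                             # ≈ 1.0, the variance of N(0, 1)
```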

The use of the method of characteristic functions is based mainly on the properties of characteristic functions indicated above and also on the following two theorems.

Bochner's theorem (description of the class of characteristic functions). Suppose that a function $\varphi$ is given on $\mathbb{R}$ and that $\varphi(0)=1$. For $\varphi$ to be the characteristic function of some probability measure it is necessary and sufficient that it be continuous and positive definite.

Lévy's theorem (continuity of the correspondence). Let $\{\mu_n\}$ be a sequence of probability measures and let $\{\hat\mu_n\}$ be the sequence of their characteristic functions. Then $\mu_n$ converges weakly to some probability measure $\mu$ (that is, $\int f\,d\mu_n\to\int f\,d\mu$ for an arbitrary continuous bounded function $f$) if and only if $\hat\mu_n(t)$ converges at every point $t$ to some continuous function $\varphi$; in the case of convergence, $\varphi=\hat\mu$. This implies that the relative compactness (in the sense of weak convergence) of a family of probability measures is equivalent to the equicontinuity at zero of the family of corresponding characteristic functions.

Bochner's theorem makes it possible to view the Fourier–Stieltjes transform as an isomorphism between the semi-group (under the operation of convolution) of probability measures on $\mathbb{R}$ and the semi-group (under pointwise multiplication) of positive-definite continuous functions on $\mathbb{R}$ that are equal to one at zero. Lévy's theorem asserts that this algebraic isomorphism is also a homeomorphism if the semi-group of probability measures carries the topology of weak convergence and the semi-group of positive-definite functions carries the topology of uniform convergence on bounded sets.

Expressions are known for the characteristic functions of the basic probability measures (see [1], [2]). For example, the characteristic function of the Gaussian measure with mean $m$ and variance $\sigma^2$ is $e^{imt-\sigma^2t^2/2}$.
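The Gaussian closed form is easy to verify by direct numerical integration against the Gaussian density (an added check, not part of the original article; the parameter values below are arbitrary):

```python
import numpy as np

m, sigma, t = 1.5, 2.0, 0.7
x = np.linspace(m - 10 * sigma, m + 10 * sigma, 20001)
dx = x[1] - x[0]
# Gaussian density with mean m and variance sigma^2
p = np.exp(-(x - m)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
numeric = (np.exp(1j * t * x) * p).sum() * dx        # integral of e^{itx} p(x) dx
closed = np.exp(1j * m * t - sigma**2 * t**2 / 2)    # e^{imt - sigma^2 t^2 / 2}
print(abs(numeric - closed))                          # ≈ 0
```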

For non-negative integer-valued random variables $X$ one uses, apart from the characteristic function, also its analogue: the generating function
$$P(z)=\mathsf{E}\,z^X=\sum_{k=0}^{\infty}z^k\,\mathsf{P}(X=k),\qquad |z|\leq 1,$$
which is connected with the characteristic function by the relation $\varphi_X(t)=P(e^{it})$.
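For instance (an added sketch, with the Poisson distribution as an illustrative choice), Poisson($\lambda$) has generating function $P(z)=e^{\lambda(z-1)}$, and substituting $z=e^{it}$ reproduces its characteristic function computed directly from the probabilities:

```python
import numpy as np
from math import factorial

lam, t = 3.0, 1.1
P = lambda z: np.exp(lam * (z - 1))        # generating function of Poisson(lam)
via_P = P(np.exp(1j * t))                  # phi(t) = P(e^{it})

# direct evaluation: phi(t) = sum_k e^{itk} P(X = k), truncated tail negligible
k = np.arange(60)
pk = np.exp(-lam) * np.array([lam**j / factorial(j) for j in range(60)])
direct = np.sum(np.exp(1j * t * k) * pk)
print(abs(via_P - direct))                 # ≈ 0
```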

The characteristic function of a probability measure $\mu$ on a finite-dimensional space $\mathbb{R}^n$ is defined similarly:
$$\hat\mu(t)=\int_{\mathbb{R}^n}e^{i(t,x)}\,\mu(dx),\qquad t\in\mathbb{R}^n,$$
where $(t,x)$ denotes the scalar product. The facts stated above are also valid for characteristic functions of probability measures on $\mathbb{R}^n$.

References

[1] E. Lukacs, "Characteristic functions" , Griffin (1970)
[2] W. Feller, "An introduction to probability theory and its applications" , 2 , Wiley (1971)
[3] Yu.V. Prokhorov, Yu.A. Rozanov, "Probability theory, basic concepts. Limit theorems, random processes" , Springer (1969) (Translated from Russian)
[4] V.M. Zolotarev, "One-dimensional stable distributions" , Amer. Math. Soc. (1986) (Translated from Russian)


How to Cite This Entry:
Characteristic function. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Characteristic_function&oldid=20847
This article was adapted from an original article by N.N. Vakhania (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article