
Markov chain

From Encyclopedia of Mathematics
Revision as of 17:09, 7 February 2011 by 127.0.0.1 (talk) (Importing text file)

A Markov process with finite or countable state space. The theory of Markov chains was created by A.A. Markov who, in 1907, initiated the study of sequences of dependent trials and related sums of random variables [1].

Let the state space be the set of natural numbers or a finite subset thereof. Let $\xi(t)$ be the state of a Markov chain at time $t$. The fundamental property of a Markov chain is the Markov property, which for a discrete-time Markov chain (that is, when $t$ takes only non-negative integer values) is defined as follows: For any $n$, any non-negative integers $t_1 < \dots < t_n < t$ and any natural numbers $i_1, \dots, i_n, j$, the equality

$$P\{\xi(t) = j \mid \xi(t_1) = i_1, \dots, \xi(t_n) = i_n\} = P\{\xi(t) = j \mid \xi(t_n) = i_n\} \tag{1}$$

holds.

The Markov property (1) can be formulated as follows. The time $t_n$ and the related events of the form $\{\xi(t_n) = i_n\}$ are called the "present" of the process; events determined by the values $\xi(s)$, $s < t_n$, are called the "past" of the process; events determined by $\xi(u)$, $u > t_n$, are called the "future" of the process. Then (1) is equivalent to the following: For any $t_n$ and fixed "present" $\{\xi(t_n) = i_n\}$, any "past" event $A$ and "future" event $B$ are conditionally independent, given the present. That is,

$$P\{AB \mid \xi(t_n) = i_n\} = P\{A \mid \xi(t_n) = i_n\}\, P\{B \mid \xi(t_n) = i_n\}.$$
In the probabilistic description of a Markov chain the transition probabilities

$$p_{ij}(s,t) = P\{\xi(t) = j \mid \xi(s) = i\}, \qquad s < t, \tag{2}$$

play a major role. When (2) depends only on the difference $t - s$, the Markov chain is called homogeneous (in time); otherwise it is called non-homogeneous. Only homogeneous Markov chains are considered below. Let

$$p_{ij} = P\{\xi(t+1) = j \mid \xi(t) = i\}.$$

The matrix $P = \|p_{ij}\|$ with entries $p_{ij}$ is called the matrix of transition probabilities. The probability of an arbitrary trajectory $\xi(0) = i_0, \xi(1) = i_1, \dots, \xi(t) = i_t$ is given by the transition probabilities and the initial distribution $p_{i_0} = P\{\xi(0) = i_0\}$ as follows:

$$P\{\xi(0) = i_0, \dots, \xi(t) = i_t\} = p_{i_0}\, p_{i_0 i_1} \cdots p_{i_{t-1} i_t}.$$
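As a quick illustration (not part of the original article), a trajectory probability can be computed directly from a transition matrix and an initial distribution; the 2-state chain and all its numbers below are arbitrary example values.

```python
import numpy as np

def trajectory_probability(initial, P, path):
    """Probability of the trajectory path = [i0, i1, ..., it] under the
    initial distribution `initial` and transition matrix P."""
    prob = initial[path[0]]
    for i, j in zip(path, path[1:]):
        prob *= P[i, j]
    return prob

# Arbitrary 2-state homogeneous chain (example values).
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
initial = np.array([0.5, 0.5])

p = trajectory_probability(initial, P, [0, 0, 1])
# p = 0.5 * 0.9 * 0.1 = 0.045
```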

Along with the transition probabilities $p_{ij}$, in a Markov chain the $t$-step transition probabilities are also discussed:

$$p_{ij}(t) = P\{\xi(s+t) = j \mid \xi(s) = i\}.$$

These transition probabilities satisfy the Kolmogorov–Chapman equation

$$p_{ij}(s+t) = \sum_k p_{ik}(s)\, p_{kj}(t).$$
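The Kolmogorov–Chapman equation can be checked numerically: for a homogeneous chain the $t$-step transition matrix is the $t$-th power of the transition matrix, and the equation becomes matrix multiplication. A sketch with an arbitrary 3-state example matrix:

```python
import numpy as np

# Arbitrary 3-state transition matrix (rows sum to 1).
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])

P2 = np.linalg.matrix_power(P, 2)   # 2-step transition probabilities
P3 = np.linalg.matrix_power(P, 3)   # 3-step transition probabilities
P5 = np.linalg.matrix_power(P, 5)   # 5-step transition probabilities

# Kolmogorov-Chapman: p_ij(2+3) = sum_k p_ik(2) p_kj(3)
assert np.allclose(P5, P2 @ P3)
```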

Using the transition probabilities it is possible to give the following classification of states. Two states $i$ and $j$ are called communicating if there are $t_1, t_2$ such that $p_{ij}(t_1) > 0$ and $p_{ji}(t_2) > 0$. A state $i$ is called inessential if there is a state $j$ such that $p_{ij}(t) > 0$ for some $t$ and $p_{ji}(t) = 0$ for all $t$. All remaining states are called essential. Thus, the complete state space of a Markov chain decomposes into inessential and essential states. The set of all essential states decomposes into disjoint classes of communicating states, such that any two states from one class communicate with each other, while for any two states $i$, $j$ from distinct classes, $p_{ij}(t) = 0$ for all $t$. A Markov chain for which all states belong to one class of communicating states is called non-decomposable (cf. Markov chain, non-decomposable); otherwise the Markov chain is called decomposable (see Markov chain, decomposable). If the state space is finite, then its decomposition into these classes determines, to a great extent, the asymptotic properties of the Markov chain. For example, for a finite non-decomposable Markov chain the limit

$$\lim_{t \to \infty} \frac{1}{t} \sum_{s=1}^{t} p_{ij}(s) = p_j > 0, \tag{3}$$

where $\sum_j p_j = 1$, always exists. If, in addition, the Markov chain is aperiodic, that is, if for some $t_0$, $p_{ij}(t) > 0$ for all $t \ge t_0$ and all states $i$ and $j$ (see also Markov chain, periodic), then there is the stronger result

$$\lim_{t \to \infty} p_{ij}(t) = p_j > 0 \tag{4}$$

(see also Markov chain, ergodic).
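The ergodic limit can be observed numerically: for a finite non-decomposable aperiodic chain, all rows of the $t$-step transition matrix approach the same vector $(p_j)$ as $t$ grows. A sketch, using an arbitrary 3-state example matrix:

```python
import numpy as np

# Non-decomposable, aperiodic example chain (p_00 > 0 forces aperiodicity).
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])

Pt = np.linalg.matrix_power(P, 200)   # large t approximates the limit
pi = Pt[0]                            # limiting row (p_j)

# All rows of P^t agree in the limit, and the limit is stationary.
assert np.allclose(Pt, np.tile(pi, (3, 1)), atol=1e-9)
assert np.allclose(pi @ P, pi)
assert np.isclose(pi.sum(), 1.0)
```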

If the state space of a Markov chain is countable, then its asymptotic properties depend on more subtle properties of the classes of communicating states. The series

$$\sum_{t=1}^{\infty} p_{ii}(t) \tag{5}$$

simultaneously diverge or converge for all states of a given class. A class of states is called recurrent if for any state $i$ of the class the series (5) diverges, and non-recurrent if (5) converges. In a recurrent class a Markov chain returns to any of its states with probability 1; in a non-recurrent class the probability of return is less than 1. If the mean return time to a state of a recurrent class is finite, then the class is called positive; otherwise it is called zero (see Markov chain, class of positive states of a; Markov chain, class of zero states of a). If $i$ and $j$ belong to the same class of positive states, then the limit (3) exists, and in the aperiodic case the limit (4) exists. If $j$ belongs to a class of zero states or is inessential, then $p_{ij}(t) \to 0$ as $t \to \infty$.

Let $f$ be a real-valued function defined on the states of a Markov chain $\xi(t)$. If the Markov chain is non-decomposable and its states form a positive class with stationary distribution $\{p_j\}$, then for the sum

$$S_t = \sum_{u=1}^{t} f(\xi(u))$$

the central limit theorem holds:

$$\lim_{t \to \infty} P\left\{ \frac{S_t - At}{\sigma \sqrt{t}} \le x \right\} = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-u^2/2}\, du \tag{6}$$

for some $A$ and $\sigma > 0$. For (6) to hold it is sufficient to require, in addition, that $\sum_j f^2(j)\, p_j < \infty$, $A = \sum_j f(j)\, p_j$, and $\sigma^2 > 0$.
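The centering constant $A$ in the central limit theorem is the stationary mean of $f$. A simulation sketch (the chain, the function $f$, and the seed are all arbitrary example choices) illustrating that $S_t / t \to A$ along a long trajectory:

```python
import numpy as np

rng = np.random.default_rng(1)

# Example 2-state chain with stationary distribution (1/3, 2/3).
P = np.array([[0.5, 0.5],
              [0.25, 0.75]])
pi = np.array([1/3, 2/3])
f = np.array([1.0, -1.0])   # f(0) = 1, f(1) = -1
A = f @ pi                  # stationary mean: 1/3 - 2/3 = -1/3

T = 200_000
u = rng.random(T)           # pre-drawn uniforms for speed
s, total = 0, 0.0
for t in range(T):
    s = 0 if u[t] < P[s, 0] else 1   # one step of the chain
    total += f[s]

# Time average of f along the trajectory approaches A.
assert abs(total / T - A) < 0.02
```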

If $t$ takes any value in $[0, \infty)$, then the chain is called a continuous-time Markov chain, defined in a similar way using the Markov property (1). Usually, for a continuous-time Markov chain one additionally requires the existence of the finite right derivatives

$$q_{ij} = \frac{dp_{ij}(t)}{dt}\bigg|_{t=0},$$

called the transition probability densities. For a finite continuous-time Markov chain, from the Kolmogorov–Chapman equation one obtains the Kolmogorov differential equations

$$\frac{dp_{ij}(t)}{dt} = \sum_k p_{ik}(t)\, q_{kj} \tag{7}$$

and

$$\frac{dp_{ij}(t)}{dt} = \sum_k q_{ik}\, p_{kj}(t) \tag{8}$$

with the initial conditions $p_{ij}(0) = \delta_{ij}$, where $\delta_{ij}$ is the Kronecker symbol. Under additional assumptions (7) and (8) also hold for countable Markov chains.
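For a finite chain, the solution of the Kolmogorov equations with initial condition $p_{ij}(0) = \delta_{ij}$ is the matrix exponential $P(t) = e^{tQ}$, $Q = \|q_{ij}\|$. A minimal numerical sketch, approximating the exponential by $(I + tQ/n)^n$ and comparing with the known closed form for a two-state chain (all parameter values are arbitrary examples):

```python
import numpy as np

def transition_matrix(Q, t, n=100_000):
    """Approximate P(t) = exp(tQ) by (I + tQ/n)^n."""
    m = len(Q)
    return np.linalg.matrix_power(np.eye(m) + t * Q / n, n)

# Two-state density matrix: rows sum to 0 (example rates).
a, b = 2.0, 3.0
Q = np.array([[-a, a],
              [b, -b]])
t = 0.7
P_t = transition_matrix(Q, t)

# Closed form for this chain: p_01(t) = a/(a+b) * (1 - exp(-(a+b) t)).
p01 = a / (a + b) * (1 - np.exp(-(a + b) * t))
assert abs(P_t[0, 1] - p01) < 1e-4
assert np.allclose(P_t.sum(axis=1), 1.0)   # rows of P(t) sum to 1
```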

If a continuous-time Markov chain $\xi(t)$ has a stationary distribution $\{p_j\}$ (that is, the distribution $p_j = P\{\xi(t) = j\}$ does not depend on the time $t$), then $\{p_j\}$ satisfies the system of linear equations

$$\sum_i p_i q_{ij} = 0, \qquad \sum_j p_j = 1. \tag{9}$$
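The stationary system can be solved numerically by replacing one of its (redundant) balance equations with the normalization condition. A sketch with an arbitrary two-state density matrix:

```python
import numpy as np

def stationary(Q):
    """Solve pi Q = 0, sum(pi) = 1 for a finite density matrix Q."""
    m = len(Q)
    A = np.vstack([Q.T[:-1], np.ones(m)])   # drop one redundant equation
    b = np.zeros(m)
    b[-1] = 1.0                              # normalization
    return np.linalg.solve(A, b)

# Example rates: leave state 0 at rate 2, leave state 1 at rate 3.
Q = np.array([[-2.0, 2.0],
              [3.0, -3.0]])
pi = stationary(Q)

assert np.allclose(pi @ Q, 0.0)   # balance equations hold
assert np.isclose(pi.sum(), 1.0)
```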

Markov chains have been widely used in the solution of various applied problems. For example, in queueing theory, for the calculation of the distribution of the number of busy lines in an $M|M|n$ system with refusals (that is, in a system comprising $n$ lines with a Poisson flow of requests and an exponential law of service time) a finite continuous-time Markov chain with states $0, 1, \dots, n$ and the following transition probability densities is used:

$$q_{i,i+1} = \lambda \quad (0 \le i \le n-1), \qquad q_{i,i-1} = i\mu \quad (1 \le i \le n), \qquad q_{ij} = 0 \quad (|i-j| \ge 2)$$

(here, $\lambda$ is the intensity of the Poisson flow of requests and $1/\mu$ is the mean service time). Using (9), the stationary distribution of the number of busy lines in this case can be determined as

$$p_k = \frac{(\lambda/\mu)^k / k!}{\sum_{m=0}^{n} (\lambda/\mu)^m / m!}, \qquad k = 0, \dots, n,$$

which is called the Erlang distribution.
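The Erlang distribution is easy to compute and to check against the balance relations of this birth–death chain, $\lambda p_k = (k+1)\mu\, p_{k+1}$. A sketch with example parameter values:

```python
import numpy as np
from math import factorial

def erlang_distribution(rho, n):
    """Erlang distribution for n lines and offered load rho = lambda/mu:
    pi_k = (rho^k / k!) / sum_{m=0}^{n} rho^m / m!"""
    weights = np.array([rho**k / factorial(k) for k in range(n + 1)])
    return weights / weights.sum()

# Example parameters: arrival rate 3, service rate 1, five lines.
lam, mu, n = 3.0, 1.0, 5
pi = erlang_distribution(lam / mu, n)

# Stationarity of the birth-death chain: lambda pi_k = (k+1) mu pi_{k+1}.
for k in range(n):
    assert np.isclose(lam * pi[k], (k + 1) * mu * pi[k + 1])
assert np.isclose(pi.sum(), 1.0)
```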

See also Markov chain, generalized; Markov chain, recurrent; Absorbing state; Stochastic matrix; Transition with prohibitions.

References

[1] A.A. Markov, Izv. Peterb. Akad. Nauk (6) , 1 : 3 (1907) pp. 61–80
[2] J.L. Doob, "Stochastic processes" , Wiley (1953)
[3] K.L. Chung, "Markov chains with stationary transition probabilities" , Springer (1967)
[4] W. Feller, "An introduction to probability theory and its applications" , 1 , Wiley (1966)


How to Cite This Entry:
Markov chain. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Markov_chain&oldid=14607
This article was adapted from an original article by B.A. Sevast'yanov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article