
Markov chain, ergodic



A homogeneous Markov chain $ \xi ( t) $ with the following property: There are quantities (independent of $ i $)

$$ \tag{1 } p _ {j} = \lim\limits _ {t \rightarrow \infty } p _ {ij} ( t) ,\ \ \sum _ { j } p _ {j} = 1 , $$

where

$$ p _ {ij} ( t) = {\mathsf P} \{ \xi ( t) = j \mid \xi ( 0) = i \} $$

are the transition probabilities. The distribution $ \{ p _ {j} \} $ on the state space of the chain $ \xi ( t) $ is called a stationary distribution: If $ {\mathsf P} \{ \xi ( 0) = j \} = p _ {j} $ for all $ j $, then $ {\mathsf P} \{ \xi ( t) = j \} = p _ {j} $ for all $ j $ and $ t \geq 0 $. A fundamental property of Markov chains,

$$ {\mathsf P} \{ \xi ( t) = j \} = \ \sum _ { i } {\mathsf P} \{ \xi ( 0) = i \} p _ {ij} ( t) , $$

enables one to find the $ \{ p _ {j} \} $ without calculating the limits in (1).
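For a finite chain this reduces to a linear system. Below is a minimal sketch (not from the original article; the $ 3 $-state matrix $ P $ is a hypothetical example) that finds $ \{ p _ {j} \} $ from the stationarity equations $ p _ {j} = \sum _ {i} p _ {i} p _ {ij} $, $ \sum _ {j} p _ {j} = 1 $, rather than from the limits in (1).

```python
# Minimal sketch: stationary distribution of a finite Markov chain,
# obtained from the stationarity equations instead of the limits in (1).
# The 3-state transition matrix P is a hypothetical example.
import numpy as np

P = np.array([[0.5, 0.3, 0.2],   # P[i, j] = p_ij(1)
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])

n = P.shape[0]
# Stack the balance equations (P^T - I) p = 0 with the normalization
# sum(p) = 1 and solve the overdetermined system by least squares.
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
p, *_ = np.linalg.lstsq(A, b, rcond=None)

print(p)                                  # stationary distribution {p_j}
print(np.linalg.matrix_power(P, 50)[0])   # rows of P^t approach p, as in (1)
```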

Let

$$ \tau _ {jj} = \min \{ t \geq 1 : \xi ( t) = j \} ,\ \textrm{ where } \xi ( 0) = j , $$

be the moment of first return to the state $ j $ (for a discrete-time Markov chain); then

$$ {\mathsf E} \tau _ {jj} = p _ {j} ^ {-1} . $$

A similar (more complicated) relation holds for a continuous-time Markov chain.
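In the discrete-time case this relation is easy to check by simulation. A minimal sketch, reusing the hypothetical matrix $ P $ above: the empirical mean of the first-return times to a state $ j $ approaches $ p _ {j} ^ {-1} $.

```python
# Minimal sketch: estimate E tau_jj by simulating excursions from j;
# the sample mean should be close to 1 / p_j. P is the same hypothetical
# 3-state matrix as in the previous snippet.
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
j = 0

def first_return_time(P, j, rng):
    """Number of steps an excursion started at j takes to come back to j."""
    state, t = j, 0
    while True:
        state = rng.choice(len(P), p=P[state])
        t += 1
        if state == j:
            return t

times = [first_return_time(P, j, rng) for _ in range(20_000)]
print(np.mean(times))   # should approximate 1 / p_j
```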

The trajectories of an ergodic Markov chain satisfy the ergodic theorem: If $ f ( \cdot ) $ is a function on the state space of the chain $ \xi ( t) $, then, in the discrete-time case,

$$ {\mathsf P} \left \{ \lim\limits _ {n \rightarrow \infty } \frac{1}{n} \sum _ {t=0} ^ { n } f ( \xi ( t) ) = \sum _ { j } p _ {j} f ( j) \right \} = 1 , $$

while in the continuous-time case the sum on the left is replaced by an integral. A Markov chain for which there are $ \rho < 1 $ and $ C _ {ij} < \infty $ such that for all $ i , j , t $,

$$ \tag{2 } | p _ {ij} ( t) - p _ {j} | \leq C _ {ij} \rho ^ {t} , $$

is called geometrically ergodic. A sufficient condition for geometric ergodicity of an ergodic Markov chain is the Doeblin condition (see, for example, [1]), which for a discrete (finite or countable) Markov chain may be stated as follows: there exist an $ n < \infty $ and a state $ j $ such that $ \inf _ {i} p _ {ij} ( n) = \delta > 0 $. If the Doeblin condition is satisfied, then the constants in (2) can be chosen so that $ \sup _ {i,j} C _ {ij} = C < \infty $.
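Both the ergodic theorem above and the bound (2) can be observed numerically. In the sketch below (again with the hypothetical $ 3 $-state matrix $ P $) every entry of $ P $ is positive, so the Doeblin condition holds with $ n = 1 $: the time average of an arbitrary $ f $ approaches $ \sum _ {j} p _ {j} f ( j) $, and $ \sup _ {i,j} | p _ {ij} ( t) - p _ {j} | $ decays geometrically in $ t $.

```python
# Minimal sketch: (a) ergodic theorem -- the time average of f along one
# trajectory approaches the space average sum_j p_j f(j); (b) geometric
# ergodicity -- sup_ij |p_ij(t) - p_j| shrinks like C * rho^t, since every
# entry of this hypothetical P is positive (Doeblin condition with n = 1).
import numpy as np

rng = np.random.default_rng(1)
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
n = P.shape[0]

# Stationary distribution, as in the first snippet.
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
p, *_ = np.linalg.lstsq(A, b, rcond=None)

# (a) Time average of f along a simulated trajectory vs. space average.
f = np.array([1.0, -2.0, 5.0])
state, total, N = 0, 0.0, 200_000
for _ in range(N):
    total += f[state]
    state = rng.choice(n, p=P[state])
print(total / N, p @ f)          # the two numbers should nearly agree

# (b) sup_ij |p_ij(t) - p_j| for t = 1, ..., 8: successive ratios settle
# near some rho < 1, in accordance with (2).
Pt = np.eye(n)
for t in range(1, 9):
    Pt = Pt @ P
    print(t, np.abs(Pt - p).max())
```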

A necessary and sufficient condition for geometric ergodicity of a countable discrete-time Markov chain is the following (see [3]): There are numbers $ f ( j) $, $ q < 1 $ and a finite set $ B $ of states such that:

$$ {\mathsf E} \{ f ( \xi ( 1) ) \mid \xi ( 0) = i \} \leq q f ( i) ,\ i \notin B , $$

$$ \max _ {i \in B } {\mathsf E} \{ f ( \xi ( 1) ) \mid \xi ( 0) = i \} < \infty . $$
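As an illustration (an example added here, not taken from the original article), consider the birth–death chain on $ \{ 0 , 1 , 2 ,\dots \} $ that moves from $ i \geq 1 $ to $ i + 1 $ with probability $ a < 1/2 $ and to $ i - 1 $ with probability $ 1 - a $, and from $ 0 $ to $ 1 $ with probability $ a $ (staying at $ 0 $ otherwise). Taking $ B = \{ 0 \} $ and $ f ( j) = z ^ {j} $ with $ z = \sqrt {( 1 - a ) / a } > 1 $ gives, for $ i \notin B $,

$$ {\mathsf E} \{ f ( \xi ( 1) ) \mid \xi ( 0) = i \} = a z ^ {i+1} + ( 1 - a ) z ^ {i-1} = \left ( a z + \frac{1 - a }{z} \right ) f ( i) = 2 \sqrt {a ( 1 - a ) } f ( i) , $$

so the first condition holds with $ q = 2 \sqrt {a ( 1 - a ) } < 1 $, while $ {\mathsf E} \{ f ( \xi ( 1) ) \mid \xi ( 0) = 0 \} = a z + 1 - a < \infty $; hence such a chain is geometrically ergodic.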

References

[1] J.L. Doob, "Stochastic processes", Wiley (1953)
[2] K.L. Chung, "Markov chains with stationary transition probabilities", Springer (1967)
[3] N.N. Popov, "Conditions for geometric ergodicity of countable Markov chains", Soviet Math. Dokl. 18:3 (1977) pp. 676–679; Dokl. Akad. Nauk SSSR 234:2 (1977) pp. 316–319

This article was adapted from an original article by A.M. Zubkov (originator), which appeared in Encyclopedia of Mathematics, ISBN 1402006098.