Stationary distribution

From Encyclopedia of Mathematics
Latest revision as of 08:23, 6 June 2020


2020 Mathematics Subject Classification: Primary: 60J10 Secondary: 60J27

A probability distribution for a homogeneous Markov chain that is independent of time. Let $ \xi ( t) $ be a homogeneous Markov chain with set of states $ S $ and transition probabilities $ p _ {ij} ( t) = {\mathsf P} \{ \xi ( t) = j \mid \xi ( 0) = i \} $. A stationary distribution is a set of numbers $ \{ {\pi _ {j} } : {j \in S } \} $ such that

$$ \tag{1 } \pi _ {j} \geq 0 ,\ \sum _ {j \in S } \pi _ {j} = 1, $$

$$ \tag{2 } \sum _ {i \in S } \pi _ {i} p _ {ij} ( t) = \pi _ {j} ,\ j \in S ,\ t > 0. $$
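For a finite chain, conditions (1) and (2) can be checked mechanically. A minimal numerical sketch, using a made-up 2-state transition matrix (illustrative values, not from the article) whose stationary distribution works out to $(5/6, 1/6)$:

```python
import numpy as np

# Hypothetical 2-state chain (illustrative values, not from the article).
# Row i holds the one-step transition probabilities p_{ij}(1).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Candidate stationary distribution.
pi = np.array([5/6, 1/6])

# Condition (1): non-negative weights summing to 1.
assert np.all(pi >= 0) and np.isclose(pi.sum(), 1.0)

# Condition (2): pi is unchanged by one step of the chain,
# i.e. sum_i pi_i p_{ij} = pi_j for every j.
assert np.allclose(pi @ P, pi)
```

Since (2) holds for $t = 1$, it holds for every $t$ here: $\pi P^t = \pi$ by induction on $t$.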

The equalities (2) signify that a stationary distribution is invariant in time: If $ {\mathsf P} \{ \xi ( 0) = i \} = \pi _ {i} $, $ i \in S $, then $ {\mathsf P} \{ \xi ( t) = i \} = \pi _ {i} $ for any $ i \in S $, $ t > 0 $; moreover, for any $ t, t _ {1} , \dots, t _ {k} > 0 $, $ i _ {1} , \dots, i _ {k} \in S $,

$$ {\mathsf P} \{ \xi ( t _ {1} + t) = i _ {1} , \dots, \xi ( t _ {k} + t) = i _ {k} \} = {\mathsf P} \{ \xi ( t _ {1} ) = i _ {1} , \dots, \xi ( t _ {k} ) = i _ {k} \} . $$
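This shift-invariance of the finite-dimensional distributions can be verified numerically for a small chain. The sketch below uses a hypothetical 2-state matrix (illustrative values, not from the article) with stationary distribution $(5/6, 1/6)$, and compares joint laws of the chain observed at two time points:

```python
import numpy as np

# Hypothetical 2-state chain with stationary distribution (5/6, 1/6).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
pi = np.array([5/6, 1/6])

def joint(start, t1, t2):
    """Joint law P{xi(t1) = i, xi(t2) = j} (t1 <= t2) when xi(0) ~ start."""
    law_t1 = start @ np.linalg.matrix_power(P, t1)  # marginal at time t1
    bridge = np.linalg.matrix_power(P, t2 - t1)     # t2 - t1 further steps
    return law_t1[:, None] * bridge                 # entry (i, j)

# Shifting both observation times by t leaves the joint law unchanged
# when the chain starts from pi.
for t in (1, 5, 20):
    assert np.allclose(joint(pi, 2, 7), joint(pi, 2 + t, 7 + t))
```

Started from any other initial distribution, the joint laws would in general depend on the shift $t$.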

If $ i \in S $ is a state of the Markov chain $ \xi ( t) $ for which the limits

$$ \lim\limits _ {t \rightarrow \infty } p _ {ij} ( t) = \pi _ {j} ( i) \geq 0,\ \ j \in S ,\ \ \sum _ {j \in S } \pi _ {j} ( i) = 1 , $$

exist, then the set of numbers $ \{ {\pi _ {j} ( i) } : {j \in S } \} $ satisfies (2) and is a stationary distribution of the chain $ \xi ( t) $ (see also Transition probabilities).
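For a finite chain, $p_{ij}(t)$ is the $(i, j)$ entry of the $t$-th matrix power, so these limits can be observed directly. A sketch with a hypothetical ergodic 2-state matrix (illustrative values, not from the article):

```python
import numpy as np

# Hypothetical ergodic 2-state chain (illustrative values).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# p_{ij}(t) is the (i, j) entry of P**t.  The second-largest eigenvalue
# of this P is 0.4, so the rows of P**t converge geometrically to a
# common limit row, which is the stationary distribution (5/6, 1/6).
Pt = np.linalg.matrix_power(P, 100)
for i in range(2):
    assert np.allclose(Pt[i], [5/6, 1/6])
```

Here the limits $\pi_j(i)$ do not depend on the starting state $i$, because this chain has a single class of positive states.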

The system of linear equations (2) relative to $ \{ \pi _ {j} \} $, given the supplementary conditions (1), has a unique solution if the number of classes of positive states of the Markov chain $ \xi ( t) $ is equal to 1; if the chain has $ k $ classes of positive states, then the set of its stationary distributions is the convex hull of $ k $ stationary distributions, each of which is concentrated on one class (see Markov chain, class of positive states of a).
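In the single-class case, one common way to compute the unique solution is to solve the balance equations (2) together with the normalisation (1) as one linear system. A minimal sketch, again with a hypothetical 2-state matrix (illustrative values, not from the article):

```python
import numpy as np

# Hypothetical chain with a single class of positive states.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
n = P.shape[0]

# Stack the balance equations pi (P - I) = 0 with sum(pi) = 1 and
# solve the overdetermined but consistent system by least squares.
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

assert np.allclose(pi, [5/6, 1/6])   # satisfies pi P = pi and sums to 1
```

For a chain with $k$ positive classes this system no longer has a unique solution; any convex combination of the $k$ class-concentrated stationary distributions solves it.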

Any non-negative solution of the system (2) is called a stationary measure; a stationary measure can exist also when (1) and (2) are not compatible. For example, a random walk on $ \{ 0, \pm 1 ,\dots \} $:

$$ \xi ( 0) = 0,\ \ \xi ( t) = \xi ( t - 1) + \eta ( t),\ \ t = 1, 2, \dots $$

where $ \eta ( 1) , \eta ( 2) , \dots $ are independent random variables such that

$$ {\mathsf P} \{ \eta ( i) = 1 \} = p,\ \ {\mathsf P} \{ \eta ( i) = - 1 \} = 1 - p,\ \ 0 < p < 1,\ \ i = 1, 2, \dots $$

does not have a stationary distribution, but has a stationary measure:

$$ \pi _ {j} = \left ( \frac{p}{1 - p} \right ) ^ {j} ,\ \ j = 0, \pm 1 ,\dots . $$
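The balance equations for this walk read $\pi_j = p\,\pi_{j-1} + (1-p)\,\pi_{j+1}$, and the geometric measure satisfies them at every integer $j$ while having infinite total mass. A quick numerical check (the value $p = 0.3$ is an arbitrary illustrative choice):

```python
# Stationary measure of the +-1 walk on the integers; p = 0.3 is an
# arbitrary illustrative value with 0 < p < 1.
p = 0.3
r = p / (1 - p)

def pi(j):
    return r ** j          # pi_j = (p / (1 - p)) ** j

# Balance equation (2) for the walk: mass enters state j from j - 1
# (an up-step, probability p) and from j + 1 (a down-step, 1 - p).
for j in range(-5, 6):
    assert abs(pi(j) - (p * pi(j - 1) + (1 - p) * pi(j + 1))) < 1e-9

# The total mass sum_j pi_j diverges (here r < 1, so pi_j blows up as
# j -> -infinity), so (1) cannot hold: this is a stationary measure
# but not a stationary distribution.
```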

One of the possible probabilistic interpretations of a stationary measure $ \{ \pi _ {j} \} $ of a Markov chain $ \xi ( t) $ with set of states $ S $ is as follows. Let there be a countable set of independent realizations of $ \xi ( t) $, and let $ \eta _ {t} ( i) $ be the number of realizations for which $ \xi ( t) = i $. If the random variables $ \eta _ {0} ( i) $, $ i \in S $, are independent and are subject to Poisson distributions with respective means $ \pi _ {i} $, $ i \in S $, then for any $ t > 0 $ the random variables $ \eta _ {t} ( i) $, $ i \in S $, are independent and have the same distributions as $ \eta _ {0} ( i) $, $ i \in S $.
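This Poisson invariance can be seen in a small Monte Carlo experiment. The sketch below scales the stationary distribution of a hypothetical 2-state chain into a finite measure, seeds each state with an independent Poisson number of particles, moves every particle one step, and checks that the mean occupation numbers are unchanged (the matrix, total mass, and sample size are illustrative assumptions, not from the article):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-state chain with stationary distribution (5/6, 1/6),
# scaled by an arbitrary total mass so the Poisson means are not tiny.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
pi = np.array([5/6, 1/6])
mass = 12.0
n_runs = 20000

# Independent Poisson(mass * pi_i) particle counts in each state.
n0 = rng.poisson(mass * pi, size=(n_runs, 2))

# Move each particle one step: binomial thinning per source state.
stay0 = rng.binomial(n0[:, 0], P[0, 0])   # state-0 particles that stay
go0 = rng.binomial(n0[:, 1], P[1, 0])     # state-1 particles moving to 0
after0 = stay0 + go0
after1 = (n0[:, 0] - stay0) + (n0[:, 1] - go0)

# Mean counts after one step still match mass * pi, up to Monte Carlo
# noise, as the Poisson-invariance statement predicts.
assert abs(after0.mean() - mass * pi[0]) < 0.2
assert abs(after1.mean() - mass * pi[1]) < 0.2
```

The exact statement (independence and exact Poisson laws at every $t$) follows from the thinning and superposition properties of the Poisson distribution; the simulation above only checks the means.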

References

[C] K.L. Chung, "Markov chains with stationary transition probabilities" , Springer (1960) MR0116388 Zbl 0092.34304
[K] S. Karlin, "A first course in stochastic processes" , Acad. Press (1966) MR0208657 Zbl 0315.60016 Zbl 0226.60052 Zbl 0177.21102

Comments

Stationary distributions are also defined for more general Markov processes, see e.g. [B].

References

[B] L.P. Breiman, "Probability" , Addison-Wesley (1968) MR0229267 Zbl 0174.48801
How to Cite This Entry:
Stationary distribution. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Stationary_distribution&oldid=23656
This article was adapted from an original article by A.M. Zubkov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article