Markov process, stationary

From Encyclopedia of Mathematics
 
Latest revision as of 07:59, 6 June 2020


A Markov process which is a stationary stochastic process. There is a stationary Markov process associated with a homogeneous Markov transition function if and only if there is a stationary initial distribution $\mu(A)$ corresponding to this function, that is, $\mu(A)$ satisfies

$$ \mu(A) = \int_X P(x, t, A)\, \mu(dx). $$
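For a finite phase space the stationarity condition reduces to the linear system $\mu P = \mu$, $\sum_i \mu_i = 1$. A minimal numerical sketch, assuming a hypothetical 3-state transition matrix (the matrix is illustrative, not from the article):

```python
import numpy as np

# Hypothetical 3-state transition matrix (rows sum to 1).
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.4, 0.4]])

n = P.shape[0]
# Stack the stationarity equations mu (P - I) = 0 with the
# normalization sum(mu) = 1 and solve in the least-squares sense
# (the system is consistent, so the solution is exact).
A = np.vstack([(P - np.eye(n)).T, np.ones((1, n))])
b = np.append(np.zeros(n), 1.0)
mu, *_ = np.linalg.lstsq(A, b, rcond=None)

assert np.allclose(mu @ P, mu)    # mu is invariant under P
assert np.isclose(mu.sum(), 1.0)  # mu is a probability distribution
```

Starting the chain from $\mu$ then yields a stationary Markov process.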

If the phase space $X$ is finite, then a stationary initial distribution always exists, regardless of whether the process has discrete $(t = 0, 1, \dots)$ or continuous time. For a process in discrete time and for a countable set $X$, a condition for the existence of a stationary distribution has been found by A.N. Kolmogorov [1]: It is necessary and sufficient that there is a class of communicating states $Y \subset X$ such that the mathematical expectation of the time for reaching $y_2 \in Y$ from $y_1 \in Y$ is finite for any $y_1 \in Y$.

This criterion has been generalized to strong Markov processes with an arbitrary phase space $X$: For the existence of a stationary process it is sufficient that there is a compact set $K \subset X$ such that the expectation of the time of reaching $K$ from $x$ is finite for all $x \in X$. There is the following sufficient condition for the existence of a stationary Markov process in terms of Lyapunov stochastic functions (cf. Lyapunov stochastic function): If there is a function $V(x) \leq 0$ for which $L V(x) \leq -1$ for $x \notin K$, then there is a stationary Markov process associated with the Markov transition function $P(x, t, A)$. Here $L$ is the infinitesimal generator of the process.
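Kolmogorov's hitting-time criterion can be checked explicitly for a finite chain: by first-step analysis, the mean times $h_i$ to reach a target state satisfy $h_i = 1 + \sum_k P_{ik} h_k$ for non-target $i$. A sketch with a hypothetical 3-state matrix and an illustrative helper `mean_hitting_times` (neither is from the article):

```python
import numpy as np

# Hypothetical irreducible 3-state chain (rows sum to 1).
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.4, 0.4]])

def mean_hitting_times(P, target):
    """Expected number of steps to reach `target` from each state:
    solve (I - Q) h = 1 over the non-target states, h[target] = 0."""
    n = P.shape[0]
    rest = [i for i in range(n) if i != target]
    Q = P[np.ix_(rest, rest)]  # transitions among non-target states
    h = np.zeros(n)
    h[rest] = np.linalg.solve(np.eye(len(rest)) - Q, np.ones(len(rest)))
    return h

h = mean_hitting_times(P, target=0)
assert np.isfinite(h).all()  # finite expectations, so a stationary distribution exists
```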

When the stationary initial distribution $\mu$ is unique, the corresponding stationary process is ergodic. In this case the Cesàro mean of the transition probabilities converges weakly to $\mu$. Under certain additional conditions,

$$ \lim_{t \rightarrow \infty} P(x, t, A) = \mu(A) \quad (\textrm{weakly}). $$
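For a finite chain with a unique stationary distribution this can be observed numerically: the rows of $P^t$, and hence the Cesàro averages, approach the stationary vector. A sketch with a hypothetical aperiodic 3-state matrix (illustrative, not from the article):

```python
import numpy as np

# Hypothetical irreducible, aperiodic 3-state chain.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.4, 0.4]])

t = 50
Pt = np.linalg.matrix_power(P, t)
# Cesàro mean (1/t) * sum_{s=0}^{t-1} P^s; it converges more slowly
# (at rate O(1/t)) than P^t itself does for an aperiodic chain.
cesaro = sum(np.linalg.matrix_power(P, s) for s in range(t)) / t

# All rows of P^t agree: the limit no longer depends on the start x.
assert np.allclose(Pt[0], Pt[1]) and np.allclose(Pt[1], Pt[2])
assert np.allclose(cesaro[0], Pt[0], atol=0.1)
```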

A stationary initial distribution satisfies the Fokker–Planck(–Kolmogorov) equation $L^{*} \mu = 0$, where $L^{*}$ is the adjoint of the infinitesimal operator of the process. For diffusion processes, for example, $L^{*}$ is the adjoint of the generating differential operator of the process; in this case $\mu$ has a density $p$ with respect to Lebesgue measure which satisfies $L^{*} p = 0$. In the one-dimensional case this equation can be solved by quadrature.
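For a one-dimensional diffusion $dX = b(X)\,dt + \sigma(X)\,dW$ with natural (zero-flux) boundaries, integrating $L^{*} p = 0$ once gives the quadrature $p(x) \propto \sigma(x)^{-2} \exp\bigl( \int_0^x 2 b(u)/\sigma(u)^2 \, du \bigr)$. A numerical sketch for an Ornstein–Uhlenbeck process $b(x) = -\theta x$ with constant $\sigma$ (parameter values are illustrative):

```python
import math

theta, sigma = 1.0, 0.7  # illustrative OU parameters

def unnormalized_density(x, n_steps=1000):
    """Evaluate sigma^-2 * exp(int_0^x 2 b(u)/sigma^2 du) by the
    trapezoid rule, with b(u) = -theta * u and constant sigma."""
    du = x / n_steps
    integral = 0.0
    for k in range(n_steps):
        u0, u1 = k * du, (k + 1) * du
        f0 = 2 * (-theta * u0) / sigma**2
        f1 = 2 * (-theta * u1) / sigma**2
        integral += 0.5 * (f0 + f1) * du
    return math.exp(integral) / sigma**2

# For the OU process the quadrature gives a Gaussian density with
# variance sigma^2 / (2 * theta): p(x)/p(0) = exp(-theta x^2 / sigma^2).
ratio = unnormalized_density(1.0) / unnormalized_density(0.0)
assert abs(ratio - math.exp(-theta / sigma**2)) < 1e-9
```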

References

[1] A.N. Kolmogorov, "Markov chains with a countable number of states" , Moscow (1937) (In Russian)
[2] J.L. Doob, "Stochastic processes" , Wiley (1953)
[3] B.A. Sevast'yanov, "An ergodic theorem for Markov processes and its application to telephone systems with refusals" Theor. Probab. Appl. , 2 (1957) pp. 104–112; Teor. Veroyatnost. i Primenen. , 2 : 1 (1957) pp. 106–116

Comments

References

[a1] K.L. Chung, "Markov chains with stationary transition probabilities", Springer (1960)
[a2] W. Feller, "An introduction to probability theory and its applications", 1–2, Wiley (1966)
[a3] P. Lévy, "Processus stochastiques et mouvement Brownien", Gauthier-Villars (1965)
[a4] E. Parzen, "Stochastic processes", Holden-Day (1962)
[a5] Yu.A. Rozanov, "Stationary random processes", Holden-Day (1967) (Translated from Russian)
How to Cite This Entry:
Markov process, stationary. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Markov_process,_stationary&oldid=25956
This article was adapted from an original article by R.Z. Khas'minskii (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article