Absorbing state

From Encyclopedia of Mathematics
Latest revision as of 16:08, 1 April 2020


of a Markov chain $ \xi (t) $

2020 Mathematics Subject Classification: Primary: 60J10 [MSN][ZBL]

A state $ i $ such that

$$ {\mathsf P} \{ \xi (t) = i \mid \xi (s) = i \} = 1 \ \textrm{ for } \textrm{ any } t \geq s. $$

An example of a Markov chain with absorbing state $ 0 $ is a branching process.
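This can be checked in a short simulation. The sketch below uses a Galton–Watson branching process whose offspring law (uniform on {0, 1, 2}) is an assumption made purely for illustration; it is not specified in the article:

```python
import random

# Galton-Watson branching process (hypothetical illustration): each
# individual independently leaves 0, 1 or 2 offspring with equal probability.
# State 0 is absorbing: once the population dies out it stays at 0.
random.seed(1)

def step(n):
    """Total offspring of a generation of size n."""
    return sum(random.choice([0, 1, 2]) for _ in range(n))

x = 3                      # initial population
trajectory = [x]
for _ in range(50):
    x = step(x)
    trajectory.append(x)

# Every value after the first visit to 0 (if it occurs) is again 0.
if 0 in trajectory:
    assert all(v == 0 for v in trajectory[trajectory.index(0):])
print(trajectory)
```

Since the mean offspring number here is 1 (the critical case), the population dies out with probability 1, so a long enough run almost surely ends in the absorbing state 0.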

The introduction of additional absorbing states is a convenient technique that enables one to examine the properties of trajectories of a Markov chain that are associated with hitting some set.

Example. Consider the set $ S $ of states of a homogeneous Markov chain $ \xi (t) $ with discrete time and transition probabilities

$$ p _ {ij} = {\mathsf P} \{ \xi (t+1) = j \mid \xi (t) = i \} , $$

in which a subset $ H $ is distinguished; suppose one has to find the probabilities

$$ q _ {ih} = {\mathsf P} \{ \xi ( \tau (H)) = h \mid \xi (0) = i \} , \ i \in S,\ h \in H, $$

where $ \tau (H) = \mathop{\rm min} \{ {t > 0 } : {\xi (t) \in H } \} $ is the moment of first hitting the set $ H $. If one introduces the auxiliary Markov chain $ \xi ^ {*} (t) $ differing from $ \xi (t) $ only in that all states $ h \in H $ are absorbing in $ \xi ^ {*} (t) $, then for $ h \in H $ the probabilities

$$ p _ {ih} ^ {*} (t) = {\mathsf P} \{ \xi ^ {*} (t) = h \mid \xi ^ {*} (0) = i \} = {\mathsf P} \{ \tau (H) \leq t , \xi ( \tau (H)) = h \mid \xi (0) = i \} $$

are monotonically non-decreasing as $ t \uparrow \infty $ and

$$ \tag{* } q _ {ih} = \lim\limits _ {t \rightarrow \infty } p _ {ih} ^ {*} (t), \ i \in S,\ h \in H. $$
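The monotone convergence in (*) can be observed numerically on a small chain. The sketch below uses a symmetric random walk on $ \{0, \dots, 4\} $ with $ H = \{0, 4\} $ made absorbing; this concrete chain is an illustrative assumption, not taken from the article:

```python
import numpy as np

# Auxiliary chain xi* for a symmetric random walk on {0,...,4} in which
# H = {0, 4} has been made absorbing (hypothetical illustrative chain).
P = np.zeros((5, 5))
P[0, 0] = P[4, 4] = 1.0               # absorbing states: p*_{hh} = 1
for i in range(1, 4):
    P[i, i - 1] = P[i, i + 1] = 0.5   # interior states step left or right

# p*_{ih}(t) is the (i, h) entry of P^t; track p*_{2,4}(t) as t grows.
probs = []
Pt = np.eye(5)
for t in range(1, 41):
    Pt = Pt @ P
    probs.append(Pt[2, 4])

# The sequence is non-decreasing and approaches q_{2,4} = 1/2.
assert all(a <= b + 1e-12 for a, b in zip(probs, probs[1:]))
print(probs[-1])
```

The printed value is close to $ 1/2 $, the probability that the walk started at 2 is absorbed at 4 rather than 0, consistent with (*).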

By virtue of the basic definition of a Markov chain

$$ p _ {ih} ^ {*} (t + 1) = \ \sum _ {j \in S } p _ {ij} p _ {jh} ^ {*} (t), \ t \geq 0,\ i \in S \setminus H,\ h \in H, $$

$$ p _ {hh} ^ {*} (t) = 1,\ h \in H; \ p _ {ih} ^ {*} (t) = 0,\ i, h \in H, i \neq h. $$

The passage to the limit for $ t \rightarrow \infty $ taking into account (*) gives a system of linear equations for $ q _ {ih} $:

$$ q _ {ih} = \sum _ {j \in S } p _ {ij} q _ {jh} ,\ \ i \in S \setminus H,\ h \in H, $$

$$ q _ {hh} = 1,\ h \in H ; \ q _ {ih} = 0,\ i, h \in H, i \neq h. $$
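For a finite chain this linear system can be solved directly. A minimal sketch, again using the symmetric walk on $ \{0, \dots, 4\} $ with $ H = \{0, 4\} $ as a hypothetical example:

```python
import numpy as np

# Symmetric random walk on {0,...,4}; H = {0, 4} are absorbing
# (hypothetical illustrative chain, not part of the original article).
P = np.zeros((5, 5))
P[0, 0] = P[4, 4] = 1.0
for i in range(1, 4):
    P[i, i - 1] = P[i, i + 1] = 0.5

transient = [1, 2, 3]
H = [0, 4]
Q = P[np.ix_(transient, transient)]   # transitions among transient states
R = P[np.ix_(transient, H)]           # one-step transitions into H

# q_{ih} = sum_j p_{ij} q_{jh} together with q_{hh} = 1 rearranges, for the
# transient rows, into the linear system (I - Q) q = R.
q = np.linalg.solve(np.eye(len(transient)) - Q, R)
print(q)   # row for state i gives (q_{i,0}, q_{i,4})
```

For this chain the solution is the classical gambler's-ruin answer: starting from state $ i $, the walk is absorbed at 4 with probability $ i/4 $ and at 0 with probability $ 1 - i/4 $.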

References

[F] W. Feller, "An introduction to probability theory and its applications", 1, Wiley (1968)
How to Cite This Entry:
Absorbing state. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Absorbing_state&oldid=26359
This article was adapted from an original article by A.M. Zubkov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article