Jump process
A stochastic process that changes its state only at random moments of time forming an increasing sequence. The term "jump process" is sometimes applied to any process with piecewise-constant trajectories.

An important class of jump processes is formed by Markov jump processes. A Markov process is a jump process if its transition function $P(s,x,t,B)$ is such that

$$ \tag{1} \lim_{t \downarrow s} \frac{P(s,x,t,B) - I_B(x)}{t-s} = q(s,x,B), $$

where $I_B(x)$ is the indicator of the set $B$ in the phase space $(E, {\mathcal E})$, and if the regularity condition holds, i.e. the convergence in (1) is uniform and the kernel $q(s,x,B)$ satisfies certain boundedness and continuity conditions.
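For example, for a Poisson process with intensity $\lambda$ one has $P(s,x,t,\{x\}) = e^{-\lambda(t-s)}$ and $P(s,x,t,\{x+1\}) = \lambda(t-s)e^{-\lambda(t-s)}$, so (1) gives $q(s,x,\{x\}) = -\lambda$ and $q(s,x,\{x+1\}) = \lambda$, with $q(s,x,B) = 0$ for sets $B$ containing neither $x$ nor $x+1$.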

Let

$$ a(t,x) = -q(t,x,\{x\}), \qquad a(t,x,B) = q(t,x,B \setminus \{x\}), $$

$$ \Phi(t,x,B) = \begin{cases} \dfrac{a(t,x,B)}{a(t,x)} & \text{if } a(t,x) > 0, \\ 0 & \text{otherwise}. \end{cases} $$

These quantities admit the following interpretation: up to $o(\Delta t)$ (as $\Delta t \rightarrow 0$), $a(t,x)\Delta t$ is the probability that in the time interval $(t, t+\Delta t)$ the process leaves the state $x$, and $\Phi(t,x,B)$ (for $a(t,x) > 0$) is the conditional probability that the process hits the set $B$, given that it leaves the state $x$ at time $t$.

When the regularity conditions hold, the transition function of a jump process is differentiable with respect to $t$ for $t > s$ and with respect to $s$ for $s < t$, and satisfies the forward and backward Kolmogorov equations with the corresponding boundary conditions:

$$ \frac{\partial P(s,x,t,B)}{\partial t} = -\int_B a(t,y)\, P(s,x,t,dy) + \int_E a(t,y,B)\, P(s,x,t,dy), $$

$$ \lim_{t \downarrow s} P(s,x,t,B) = I_B(x); $$

$$ \frac{\partial P(s,x,t,B)}{\partial s} = a(s,x) \left[ P(s,x,t,B) - \int_E P(s,y,t,B)\, \Phi(s,x,dy) \right], $$

$$ \lim_{s \uparrow t} P(s,x,t,B) = I_B(x). $$
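On a finite phase space the kernel $q$ reduces to a matrix $Q$ with $Q_{xx} = -a(x)$ and $Q_{xy} = a(x,\{y\})$ for $y \neq x$, and in the homogeneous case the forward equation becomes the matrix ODE $dP/dt = PQ$, solved by $P(s,t) = e^{(t-s)Q}$. A minimal numerical sketch (the two-state generator is an illustrative assumption):

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical two-state generator: Q[x, x] = -a(x), Q[x, y] = a(x, {y}).
Q = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])

# Forward equation dP/dt = P Q with P(s, s) = I, solved by a matrix exponential.
s, t = 0.0, 0.5
P = expm((t - s) * Q)

# Each row of P(s, t) is a probability distribution over the two states.
assert np.allclose(P.sum(axis=1), 1.0)
print(P)
```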

Let $X = (X_t)_{t \geq 0}$ be a right-continuous strictly Markov jump process, let $T_n$ be the moment of the $n$-th jump of the process, $T_0 = 0$, let $Y_n = X_{T_n}$, let $S_n$ be the duration of remaining in the state $Y_n$, let $T_\infty = \lim T_n$ be the moment of cut-off, and let $X_{T_\infty} = \delta$, where $\delta$ is a point outside $E$. Then the sequence $(T_n, Y_n)$ forms a homogeneous Markov chain. Note that if $X$ is a homogeneous Markov process, then given $Y_n = x$, the holding time $S_n$ is exponentially distributed with parameter $\lambda(x)$.
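The last two facts yield the standard recipe for simulating a homogeneous Markov jump process on a finite phase space: draw the holding time in the current state $x$ from an exponential distribution with parameter $\lambda(x)$, then move according to the jump distribution $\Phi(x, \cdot)$. A minimal sketch, reusing the hypothetical generator $Q$ from above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical generator, as before: lam[x] = -Q[x, x] is the exit rate,
# and Phi[x, y] = Q[x, y] / lam[x] (for y != x) is the jump distribution.
Q = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])
lam = -np.diag(Q)
Phi = Q / lam[:, None]
np.fill_diagonal(Phi, 0.0)

def simulate(x0, horizon):
    """Return the jump moments T_n and the embedded chain Y_n up to `horizon`."""
    T, Y = [0.0], [x0]
    while True:
        x = Y[-1]
        # Holding time S_n in state x: exponential with parameter lambda(x).
        S = rng.exponential(1.0 / lam[x])
        if T[-1] + S > horizon:
            return np.array(T), np.array(Y)
        T.append(T[-1] + S)
        # Next state drawn from the conditional jump distribution Phi(x, .).
        Y.append(rng.choice(len(lam), p=Phi[x]))

T, Y = simulate(x0=0, horizon=10.0)
```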

A natural generalization of Markov jump processes is the class of semi-Markov jump processes, for which the sequence $(Y_n)$ is a Markov chain but the duration $S_n$ of remaining in the state $Y_n$ depends on $Y_n$ and $Y_{n+1}$ and has an arbitrary distribution.

In the investigation of general jump processes the so-called martingale approach has proved fruitful; within it one can obtain meaningful results without additional assumptions about the probabilistic structure of the processes. In the martingale approach one assumes that on the probability space $(\Omega, {\mathcal F}, {\mathsf P})$ of a given jump process $X$ a non-decreasing right-continuous family of $\sigma$-algebras $({\mathcal F}_t)_{t \geq 0}$, ${\mathcal F}_t \subset {\mathcal F}$, is fixed such that the random variable $X_t$ is ${\mathcal F}_t$-measurable for every $t$, so that the $T_n$ are Markov moments.

Let ${\mathcal P}$ be a predictable sigma-algebra on $\Omega \times \mathbf R_+$, and put $\widetilde{\mathcal P} = {\mathcal P} \times {\mathcal E}$. A random measure $\eta$ on $(\mathbf R_+ \times E, {\mathcal B}(\mathbf R_+) \otimes {\mathcal E})$ is said to be predictable if for any non-negative $\widetilde{\mathcal P}$-measurable function $f$ the process $(f \star \eta_t)_{t \geq 0}$, where

$$ f \star \eta_t = \int_{(0,t] \times E} f(s,x)\, \eta(ds, dx), $$

is predictable.

Let $\mu = \mu(dt, dx)$ be the jump measure of $X$, that is, the integer-valued random measure on $(\mathbf R_+ \times E, {\mathcal B}(\mathbf R_+) \otimes {\mathcal E})$ given by

$$ \mu([0,t] \times B) = \sum_{n \geq 1} I_{[0,t] \times B}(T_n, Y_n), \qquad t \in \mathbf R_+,\ B \in {\mathcal E}. $$
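On a simulated trajectory the jump measure is simply a count of the pairs $(T_n, Y_n)$ falling in $[0,t] \times B$; a sketch continuing the hypothetical example above:

```python
def jump_measure(T, Y, t, B):
    """mu([0, t] x B): the number of jumps up to time t that land in B."""
    B = set(B)
    # T[0] = 0 is the initial moment, not a jump, so the sum runs over n >= 1.
    return sum(1 for Tn, Yn in zip(T[1:], Y[1:]) if Tn <= t and Yn in B)

print(jump_measure(T, Y, t=5.0, B={1}))
```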

Under very general conditions on $(E, {\mathcal E})$ (that hold, for example, when $E$ is a complete separable metric space with Borel $\sigma$-algebra ${\mathcal E}$), there is a predictable random measure $\nu = \nu(dt, dx)$ such that either of the following two equivalent conditions holds:

1) ${\mathsf E}\, f \star \mu_\infty = {\mathsf E}\, f \star \nu_\infty$ for any non-negative $\widetilde{\mathcal P}$-measurable function $f$;

2) for all $n \geq 1$ and $B \in {\mathcal E}$ the process

$$ \bigl( \mu([0, t \wedge T_n] \times B) - \nu([0, t \wedge T_n] \times B) \bigr)_{t \geq 0} $$

is a martingale emanating from zero.
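As a sanity check of condition 2) in the simplest case: for a standard Poisson process with intensity $\lambda$ the compensator is the deterministic measure $\nu([0,t] \times E) = \lambda t$, so $\mu([0,t] \times E) - \lambda t$ is a mean-zero martingale. A Monte Carlo sketch of the mean-zero property (parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
lam, t, n_paths = 3.0, 2.0, 100_000

# For a Poisson process, mu([0, t] x E) is just the number of jumps by time t.
counts = rng.poisson(lam * t, size=n_paths)

# The compensated count mu([0, t] x E) - lam * t should average to zero.
print((counts - lam * t).mean())  # close to 0.0
```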

The predictable random measure $\nu$ is uniquely determined up to a set of ${\mathsf P}$-measure zero and is called the compensator (or dual predictable projection) of $\mu$. One can choose a version of $\nu$ such that

$$ \tag{2} \nu((T_\infty, \infty) \times E) = 0, \qquad \nu(\{t\} \times E) \leq 1 \ \text{ for all } t. $$

Let $\Omega$ be the space of trajectories of a jump process $X$ taking values in $(E, {\mathcal E})$, let ${\mathcal F}_t = \sigma(X_s,\, s \leq t)$, ${\mathcal F} = \cup_{t > 0} {\mathcal F}_t$, and let ${\mathsf P}_0$ be a probability measure for which (2) holds. Then there is a unique probability measure ${\mathsf P}$ on $(\Omega, {\mathcal F})$ such that $\nu$ is the compensator of $\mu$ with respect to ${\mathsf P}$ and the restriction of ${\mathsf P}$ to ${\mathcal F}_0$ coincides with ${\mathsf P}_0$. The proof relies on an explicit formula relating the conditional distributions of the variables $(T_n, Y_n)$ to the compensator, which in a number of cases has turned out to be a more convenient means of describing jump processes.

A jump process is a stochastic process with independent increments if and only if the corresponding compensator is deterministic.
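For instance, the compensator $\nu([0,t] \times E) = \lambda t$ of the Poisson process considered above is non-random, in agreement with the fact that the Poisson process has independent increments.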

References

[1] A.N. [A.N. Kolmogorov] Kolmogoroff, "Über die analytischen Methoden in der Wahrscheinlichkeitsrechnung", Math. Ann., 104 (1931), pp. 415–458
[2] I.I. [I.I. Gikhman] Gihman, A.V. [A.V. Skorokhod] Skorohod, "The theory of stochastic processes", 2, Springer (1975), Chapt. 3 (Translated from Russian)
[3] J. Jacod, "Calcul stochastique et problèmes de martingales", Lect. Notes in Math., 714, Springer (1979)

Comments

References

[a1] E.B. Dynkin, "Markov processes", I, Springer (1965), Chapt. 3 (Translated from Russian)
[a2] W. Feller, "An introduction to probability theory and its applications", 2, Wiley (1966), Chapt. X
[a3] M. Rosenblatt, "Random processes", Springer (1974)
[a4] L.P. Breiman, "Probability", Addison-Wesley (1968)
How to Cite This Entry:
Jump process. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Jump_process&oldid=25931
This article was adapted from an original article by Yu.M. Kabanov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article