Sample function
sample path

A function $ X _ {t} = X _ {t} ( \omega ) $ of an argument $ t $ which unambiguously corresponds to each observation of a random process $ X _ {t} \in E $, $ t \in T $, where $ \{ \omega \} = \Omega $ is the set of elementary events. The terms "realization of a random process" (or simply "realization") and "trajectory of a random process" ("trajectory"), which are equivalent to "sample function" and "sample path", are also frequently employed. A random process $ X _ {t} $ is characterized by a probability measure on the space of sample functions. In studying the local properties of the sample function $ X _ {t} $ (where $ E = \mathbf R ^ {1} $ and $ T = \mathbf R ^ {m} $, the Euclidean space of dimension $ m = 1, 2 ,\dots $) it is assumed that $ X _ {t} $ is a separable random process or that an equivalent random process with the given local properties of the sample functions can be found. The local properties of the sample functions of Gaussian processes (cf. Gaussian process) have been most extensively studied.
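As a minimal numerical illustration (added here, not part of the original article): each draw of an elementary event $ \omega $ fixes one entire sample function $ t \mapsto X _ {t} ( \omega ) $. A discretized Brownian motion makes this concrete; the function name, step counts and seed are choices of this sketch.

```python
import numpy as np

def sample_paths(n_paths=5, n_steps=1000, T=1.0, seed=0):
    """Simulate sample functions of Brownian motion on [0, T].

    Each row of the returned array is one realization t -> X_t(omega):
    a single elementary event omega determines the whole function of t.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # Independent Gaussian increments with variance dt.
    increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    # Every path starts at X_0 = 0; cumulative sums give the path values.
    paths = np.concatenate(
        [np.zeros((n_paths, 1)), np.cumsum(increments, axis=1)], axis=1
    )
    return np.linspace(0.0, T, n_steps + 1), paths

t, X = sample_paths()
```

Plotting the rows of `X` against `t` shows several distinct sample functions of the same process.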

For Gaussian random processes (fields) $ X _ {t} $ the following holds: almost all sample functions $ X _ {t} $ are either continuous or unbounded over some interval. For $ t, s \in T $ a "distance" is defined by $ d ( t, s) = [ {\mathsf E} | X _ {t} - X _ {s} | ^ {2} ] ^ {1/2} $, $ B ( t, \delta ) = \{ s : d ( s, t) \leq \delta \} $ is a "ball", and $ N ( \delta ) $ is the minimum number of such "balls" needed to cover $ T \subset \mathbf R ^ {n} $; it is assumed further that $ \sup _ {s , t \in T } d( s, t) < \infty $. A necessary and sufficient condition for the continuity of the sample functions of a homogeneous Gaussian process has the form

$$ \exists q > 1 : \ \sum _ { n } q ^ {- n } \sqrt { \mathop{\rm ln} N ( q ^ {- n } ) } < \infty . $$
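To make the entropy condition concrete, here is a numerical sketch added to the article (Brownian motion on $ [0, 1] $ is used purely as a familiar example; it is not itself homogeneous). For Brownian motion $ d( t, s) = | t - s | ^ {1/2} $, so a $ d $-ball of radius $ \delta $ is an interval of length $ 2 \delta ^ {2} $ and $ N ( \delta ) \approx \lceil 1 / ( 2 \delta ^ {2} ) \rceil $; the partial sums of the series with $ q = 2 $ then stabilize quickly.

```python
import math

def entropy_term(n, q=2.0):
    """n-th term q^{-n} * sqrt(ln N(q^{-n})) for Brownian motion on [0, 1].

    For d(t, s) = |t - s|^{1/2}, a d-ball of radius delta covers an
    interval of length 2 * delta**2, so N(delta) ~ ceil(1 / (2 delta^2)).
    """
    delta = q ** (-n)
    N = math.ceil(1.0 / (2.0 * delta ** 2))
    return delta * math.sqrt(math.log(N))

def partial_sum(n_terms, q=2.0):
    """Partial sum of the entropy series over n = 1 .. n_terms."""
    return sum(entropy_term(n, q) for n in range(1, n_terms + 1))
```

Since the terms decay geometrically (up to a square-root-logarithmic factor), the series converges, in accordance with the continuity of Brownian paths.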

If

$$ R ( t) = {\mathsf E} X _ {s} X _ {t+s} = \ \int\limits _ {- \infty } ^ \infty e ^ {it \lambda } \ dF ( \lambda ),\ \ {\mathsf E} X _ {t} = 0, $$

is concave in some neighbourhood of the point $ 0+ $, then for the sample function $ X _ {t} $ to be continuous it is necessary and sufficient that $ \sum S _ {n} ^ {1/2} < \infty $, where $ S _ {n} = F( 2 ^ {n+1} ) - F( 2 ^ {n} ) $. If $ R $ is concave in a neighbourhood of $ 0+ $ and if

$$ {\mathsf E} | X _ {t} - X _ {s} | ^ {2} \geq \ \frac{C}{| \mathop{\rm ln} | t - s | | } $$

for $ | t - s | < \delta $, almost all sample functions of the Gaussian random process $ X _ {t} $ are unbounded. If

$$ {\mathsf E} | X _ {t} - X _ {s} | ^ {2} \leq \ \frac{C}{| \mathop{\rm ln} | t - s | | ^ {1 + \epsilon } } ,\ \ \epsilon > 0, $$

almost all sample functions of the Gaussian random process (field) $ X _ {t} $ are continuous. For the sample function of a Gaussian random process to be continuous it is necessary and sufficient that

$$ \int\limits _ { 0 } ^ \infty \omega _ {R} ( e ^ {- x ^ {2} } ) dx < \infty , $$

where $ R ( t, s) = {\mathsf E} X _ {t} X _ {s} $,

$$ \omega _ {R} ( \delta ) = \sup \ [ R ( t + h _ {1} , s + h _ {2} ) - R ( t, s)] ^ {1/2} . $$

Here, the supremum is taken over $ | h _ {i} | < \delta $, $ | t | \leq C $, $ | s | \leq C $. The sample function $ X _ {t} $, $ t \in \mathbf R ^ {n} $, is in the class $ H( C, \alpha _ {1} \dots \alpha _ {n} ) $ if for all sufficiently small $ h _ {i} $,

$$ | X _ {t+h} - X _ {t} | \leq \ C \sum _ {i = 1 } ^ { n } | h _ {i} | ^ {\alpha _ {i} } , $$

$$ C > 0,\ 0 < \alpha _ {i} \leq 1,\ h = ( h _ {1} \dots h _ {n} ). $$
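As an added deterministic illustration (not from the article), membership in such a class can be refuted or sanity-checked on a finite grid for a concrete function: in dimension $ n = 1 $, $ f( t) = | t | ^ {1/2} $ belongs to $ H( 1, 1/2) $, since $ | \sqrt a - \sqrt b | \leq \sqrt { | a - b | } $, while raising the exponent to $ 0.6 $ makes the inequality fail near $ t = 0 $. The grid and step sizes are arbitrary choices.

```python
def in_hoelder_class(f, C, alpha, grid, steps):
    """Check |f(t + h) - f(t)| <= C * |h|**alpha on a finite grid.

    A grid check can only refute membership; it is a sanity test,
    not a proof.
    """
    return all(
        abs(f(t + h) - f(t)) <= C * abs(h) ** alpha
        for t in grid
        for h in steps
    )

grid = [i / 1000.0 for i in range(-1000, 1001)]          # t in [-1, 1]
steps = [j / 10000.0 for j in range(-100, 101) if j != 0]  # small h != 0
holds = in_hoelder_class(lambda t: abs(t) ** 0.5, 1.0, 0.5, grid, steps)
```

Here `holds` is `True`, while the same check with exponent `0.6` fails at $ t = 0 $, where $ | h | ^ {1/2} > | h | ^ {0.6} $ for small $ h $.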

If $ X _ {t} $ is a Gaussian random field on the unit cube $ V _ {n} ^ {0} $ in $ \mathbf R ^ {n} $ such that for sufficiently small $ h $ and $ t \in V _ {n} ^ {0} $,

$$ {\mathsf E} | X _ {t+h} - X _ {t} | ^ {2} \leq \ C _ {1} \frac{| h | ^ \gamma }{| \mathop{\rm ln} | h | | } ,\ \ C _ {1} > 0,\ 0 < \gamma \leq 2, $$

then, with probability one, uniformly in $ t \in V _ {n} ^ {0} $,

$$ X _ {t} \in H ( C, \beta _ {1} \dots \beta _ {n} ) $$

for any $ C > 0 $ and $ \beta _ {i} \leq \gamma /2 $.
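A small numerical aside added here (the choice $ \gamma = 0.9 $ is illustrative): for Brownian motion $ {\mathsf E} | X _ {t+h} - X _ {t} | ^ {2} = | h | $, and $ | h | \leq C _ {1} | h | ^ \gamma / | \mathop{\rm ln} | h | | $ holds for all $ h \in ( 0, 1) $ whenever $ \gamma < 1 $ and $ C _ {1} \geq e ^ {-1} / ( 1 - \gamma ) $ (the maximum of $ | h | ^ {1 - \gamma } | \mathop{\rm ln} | h | | $). So the theorem yields Hölder exponents $ \beta \leq \gamma / 2 $ for every $ \gamma < 1 $, consistent with Brownian paths being Hölder of every order below $ 1/2 $.

```python
import math

def moment_bound_holds(gamma, C1, hs):
    """Check |h| <= C1 * |h|**gamma / |ln |h|| for each step size h.

    For Brownian motion E|X_{t+h} - X_t|^2 = |h|, so this verifies the
    theorem's hypothesis for that process with the given gamma and C1.
    """
    return all(
        abs(h) <= C1 * abs(h) ** gamma / abs(math.log(abs(h)))
        for h in hs
    )

# Step sizes from ~0.1 down to ~1e-20; for gamma = 0.9 the sharp
# constant is e^{-1} / (1 - 0.9) ~ 3.68, so C1 = 4 suffices.
hs = [10.0 ** (-k / 4.0) for k in range(4, 80)]
ok = moment_bound_holds(0.9, 4.0, hs)
```

With `C1 = 2.0` the same check fails near $ h = e ^ {-10} $, where $ | h | ^ {0.1} | \mathop{\rm ln} | h | | \approx 3.68 $.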

A non-decreasing continuous function $ \phi ( x) $, $ x \in \mathbf R ^ {1} $, is called an upper function if for almost all $ \omega $ there exists an $ \epsilon = \epsilon ( \omega ) $ such that

$$ | X _ {t} - X _ {s} | \leq ( {\mathsf E} | X _ {t} - X _ {s} | ^ {2} ) ^ {1/2} \phi \left ( \frac{1}{| t - s | } \right ) $$

for $ | t - s | \leq \epsilon $; $ t, s \in \mathbf R ^ {n} $; $ | t | = ( \sum _ {i=1} ^ {n} t _ {i} ^ {2} ) ^ {1/2} $. If $ X _ {t} $ is a Gaussian random field with

$$ {\mathsf E} X _ {t} = 0,\ \ {\mathsf E} X _ {t} X _ {s} = { \frac{1}{2} } ( | t | ^ \alpha + | s | ^ \alpha - | t - s | ^ \alpha ),\ \ 0 < \alpha \leq 1 , $$

then $ \phi ( x) $ is an upper function if and only if

$$ \int\limits _ { e } ^ \infty t ^ {n-1} K [ \phi ( t)] dt < \infty , $$

where

$$ K [ x] = x ^ {( 4n/ \alpha ) - 1 } e ^ {- x ^ {2} /2 } . $$
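To see what this criterion selects, here is a numerical sketch added to the article. Take $ n = 1 $, $ \alpha = 1 $, so that the covariance above is that of Brownian motion and $ K[ x] = x ^ {3} e ^ {- x ^ {2} / 2 } $. For the trial family $ \phi ( t) = c \sqrt { 2 \mathop{\rm ln} t } $ the integrand equals $ c ^ {3} ( 2 \mathop{\rm ln} t ) ^ {3/2} t ^ {- c ^ {2} } $, so the integral converges exactly for $ c > 1 $: such $ \phi $ are upper functions, those with $ c < 1 $ are not. The truncation points and step counts below are arbitrary choices of this illustration.

```python
import math

def integrand(t, c, n=1, alpha=1.0):
    """t^{n-1} * K[phi(t)] with K[x] = x^{(4n/alpha)-1} e^{-x^2/2}
    and the trial function phi(t) = c * sqrt(2 ln t)."""
    phi = c * math.sqrt(2.0 * math.log(t))
    K = phi ** (4 * n / alpha - 1) * math.exp(-phi ** 2 / 2.0)
    return t ** (n - 1) * K

def integral(c, a, b, steps=100000):
    """Midpoint-rule approximation of the integral over [a, b]."""
    h = (b - a) / steps
    return sum(integrand(a + (i + 0.5) * h, c) * h for i in range(steps))
```

Comparing the tail contributions over $ [ 10 ^ {3} , 10 ^ {6} ] $: for $ c = 1.5 $ the tail is negligible (the integral converges), while for $ c = 0.8 $ it keeps growing like $ t ^ {0.36} $, signalling divergence.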

For almost all sample functions of a Gaussian random process to be analytic in a neighbourhood of a point $ t _ {0} $ it is necessary and sufficient that the covariance function $ R( t, s) $ be analytic in $ t $ and $ s $ in a neighbourhood $ | t - t _ {0} | < \delta $, $ | s - t _ {0} | < \delta $, $ \delta > 0 $.

References

[1] J.L. Doob, "Stochastic processes" , Chapman & Hall (1953)
[2] H. Cramér, M.R. Leadbetter, "Stationary and related stochastic processes" , Wiley (1967) pp. Chapts. 33–34
[3] Yu.K. Belyaev, "Continuity and Hölder's conditions for sample functions of stationary Gaussian processes" , Proc. 4-th Berkeley Symp. Math. Stat. Probab. , 2 , Univ. California Press (1961) pp. 23–33
[4] E.I. Ostrovskii, "On the local structure of Gaussian fields" Soviet Math. Dokl. , 11 : 6 (1970) pp. 1425–1427; Dokl. Akad. Nauk SSSR , 195 : 1 (1970) pp. 40–42
[5] M. Nisio, "On the continuity of stationary Gaussian processes" Nagoya Math. J. , 34 (1969) pp. 89–104
[6] R.M. Dudley, "Gaussian processes on several parameters" Ann. of Math. Statist. , 36 : 3 (1965) pp. 771–788
[7] X. Fernique, "Continuité des processus Gaussiens" C.R. Acad. Sci. Paris Sér. I Math. , 258 (1964) pp. 6058–6060
[8] M.I. Yadrenko, "Local properties of sample functions of random fields" Visnik Kiiv. Univ. Ser. Mat. Mekh. , 9 (1967) pp. 103–112 (In Ukrainian) (English abstract)
[9] T. Kawada, "On the upper and lower class for Gaussian processes with several parameters" Nagoya Math. J. , 35 (1969) pp. 109–132
[10] Yu.K. Belyaev, "Analytical random processes" Theory Probab. Appl. , 4 : 4 (1959) pp. 402–409; Teor. Veroyatnost. i Primenen. , 4 : 4 (1959) pp. 437–444
[11] E.E. Slutskii, "Qualche proposizione relativa alla teoria delle funzioni aleatorie" Giorn. Inst. Ital. Attuari , 8 : 2 (1937) pp. 183–199
[12] X.M. Fernique, "Regularité de trajectoires des fonctions aleatoires gaussiennes" J.P. Conze (ed.) J. Cani (ed.) X.M. Fernique (ed.) , Ecole d'Ete de Probabilité de Saint-Flour IV-1974 , Springer (1975) pp. 1–96

Comments

References

[a1] R.J. Adler, "The geometry of random fields" , Wiley (1981)
How to Cite This Entry:
Sample function. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Sample_function&oldid=48608
This article was adapted from an original article by Yu.K. Belyaev (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article