Random walk


A stochastic process of special form that can be interpreted as a model describing the movement of a particle in a certain state space under the action of some random mechanism. The state space is usually a $ d $-dimensional Euclidean space or the integral lattice in it. The random mechanism can be of various kinds; the most common random walks are those generated by summation of independent random variables or by Markov chains. There is no precise generally-accepted definition of a random walk.

The trajectories of the simplest random walk in the case $ d = 1 $ are described by the initial position $ S _ {0} = 0 $ and the sequence of sums

$$ \tag{1 } S _ {n} = X _ {1} + \dots + X _ {n} ,\ \ n = 1 , 2 ,\dots $$

where the $ X _ {i} $ are independent and have a Bernoulli distribution:

$$ {\mathsf P} \{ X _ {i} = 1 \} = p ,\ \ {\mathsf P} \{ X _ {i} = - 1 \} = q = 1 - p ,\ p \in ( 0 , 1 ) . $$
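As a concrete illustration (not part of the original article; the function name is ours), the sums (1) are straightforward to simulate, for example in Python:

    import random

    def sample_path(p, n):
        """Return S_0, S_1, ..., S_n of the simple random walk (1),
        with P(X_i = 1) = p and P(X_i = -1) = 1 - p."""
        s, path = 0, [0]
        for _ in range(n):
            s += 1 if random.random() < p else -1
            path.append(s)
        return path

    # A trajectory of 20 steps of the symmetric walk (p = 1/2).
    print(sample_path(0.5, 20))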

The value of $ S _ {n} $ can be interpreted as the gain of one of two players after $ n $ games in each of which this player wins one dollar with probability $ p $ and loses it with probability $ 1 - p $. If the game consists of tossing an unbiased coin, then it is assumed that $ p = 1/2 $ (a symmetric walk, see Bernoulli random walk). Under the assumption that the initial capital of the first player is $ b $ and that of the second player is $ a $, the game will finish when the moving particle (with coordinates $ S _ {1} , S _ {2} ,\dots $) first touches one of the levels $ a $ or $ - b $. At this moment, one of the players is ruined. This is the classical ruin problem, in which the boundaries at points $ a $ and $ - b $ can be regarded as absorbing.

In applications to queueing theory, the behaviour of the particle near the boundaries $ a $ and $ - b $ can differ: e.g., if $ a = \infty $ and $ b = 0 $, then the position $ Z _ {n+1} $ of the random particle at the moment $ n + 1 $ is given by

$$ \tag{2 } Z _ {n+1} = \max ( 0 , Z _ {n} + X _ {n+1} ) , $$

and the boundary at $ 0 $ can be called detaining or reflecting. There are also other possibilities for the behaviour of the particle in a neighbourhood of the boundaries.
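Recursion (2) is easy to iterate directly; a minimal Python sketch (illustrative only, the names are ours, not from the article):

    import random

    def reflected_walk(p, n):
        """Iterate recursion (2), Z_{k+1} = max(0, Z_k + X_{k+1}), for
        Bernoulli jumps X_k = +-1 and Z_0 = 0 (illustrative sketch)."""
        z, history = 0, [0]
        for _ in range(n):
            x = 1 if random.random() < p else -1
            z = max(0, z + x)
            history.append(z)
        return history

    # For p < q the process settles into the stationary regime of formula (3) below.
    print(reflected_walk(0.4, 30))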

If $ a = \infty $, then one obtains the problem of a random walk with one boundary. If $ a = b = \infty $, then one obtains an unrestricted random walk. Random walks are usually studied using the apparatus of discrete Markov chains and, in particular, by investigating the corresponding finite-difference equations. For example, in the ruin problem, let $ u _ {k} $ be the probability that the first player is ruined, given that his initial capital is equal to $ k $, $ 0 \leq k \leq a + b $, where the total capital of both players is fixed and equal to $ a + b $. Then, by the total probability formula (at the first jump), it follows that $ u _ {k} $ satisfies the equation

$$ u _ {k} = p u _ {k+1} + qu _ {k-1} ,\ 0 < k < a + b , $$

and the boundary conditions $ u _ {a+b} = 0 $, $ u _ {0} = 1 $. Thus one obtains

$$ u _ {b} = \frac{\left ( \frac{q}{p} \right ) ^ {a+b} - \left ( \frac{q}{p} \right ) ^ {b} }{\left ( \frac{q}{p} \right ) ^ {a+b} - 1 } \ \ \textrm{ when } p \neq q ; $$

$$ u _ {b} = \frac{a}{a+b} \ \textrm{ when } p = q = \frac{1}{2} . $$

The second of these formulas shows that even a "fair" game (in which both players have identical chances) leads to ruin with probability close to 1, provided that the capital $ a $ of the second player is large in comparison to $ b $ ($ u _ {b} = 1 $ when $ b < \infty $, $ a = \infty $).
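For completeness, a standard route to these formulas (the routine calculation is not spelled out above): trying $ u _ {k} = \mu ^ {k} $ in the difference equation gives the characteristic equation $ p \mu ^ {2} - \mu + q = 0 $ with roots $ \mu = 1 $ and $ \mu = q/p $, so the general solution is

$$ u _ {k} = A + B \left ( \frac{q}{p} \right ) ^ {k} \ \ ( p \neq q ) ,\ \ \ u _ {k} = A + B k \ \ ( p = q ) . $$

The boundary conditions $ u _ {0} = 1 $, $ u _ {a+b} = 0 $ then determine $ A $ and $ B $ and yield exactly the two expressions for $ u _ {b} $ above.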

The ruin problem has been thoroughly investigated. For example, it was shown by J. Lagrange (see [1], Vol. 1) that the probability $ u _ {b,n} $ of ruin of the first player at the $ n $-th step, given that his initial capital is $ b $, where the total capital is $ a+ b $ (fixed), is equal to

$$ u _ {b,n} = ( a + b ) ^ {-1} 2 ^ {n} p ^ {( n- b )/ 2 } q ^ {( n+ b )/ 2 } \sum_{k=1}^ {a+b-1} \cos ^ {n-1} \frac{\pi k }{a+b} \sin \frac{\pi k }{a+b} \sin \frac{\pi b k }{a+b} . $$

The mean time before ruin of one of the players, $ m _ {b} $, is given by

$$ m _ {b} = \frac{b}{q-p} - \frac{a+b}{q-p} \cdot \frac{1 - \left ( \frac{q}{p} \right ) ^ {b} }{1 - \left ( \frac{q}{p} \right ) ^ {a+b} } \ \textrm{ if } \ p \neq q ; $$

$$ m _ {b} = a b \ \textrm{ if } p = q = \frac{1}{2} . $$
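The exact formulas for $ u _ {b} $ and $ m _ {b} $ are easy to check by simulation; the following Python sketch (illustrative only, the names are ours, not from the article) estimates both for the fair game:

    import random

    def simulate_ruin(p, a, b, trials=20000):
        """Monte Carlo estimate of the ruin probability u_b and the mean
        duration m_b for the walk absorbed at a and -b (illustrative sketch)."""
        ruined, total_steps = 0, 0
        for _ in range(trials):
            s, n = 0, 0
            while -b < s < a:
                s += 1 if random.random() < p else -1
                n += 1
            ruined += (s == -b)
            total_steps += n
        return ruined / trials, total_steps / trials

    # Fair game, a = 6, b = 4: the exact values are u_b = a/(a+b) = 0.6 and m_b = ab = 24.
    print(simulate_ruin(0.5, a=6, b=4))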

For random walks with one boundary $ ( a = \infty ) $, described by (2), there is a stationary distribution for the random walk (as $ n \rightarrow \infty $) when $ p < q $; it coincides with the distribution of the random variable $ S = \sup _ {k \geq 0 } S _ {k} $, and

$$ \tag{3 } {\mathsf P} \{ S \geq k \} = \ \left ( \frac{p}{q} \right ) ^ {k} ,\ \ k = 0 , 1 ,\dots . $$
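A quick way to see (3) (a standard argument, not reproduced in the article): when $ p < q $ the probability that the walk ever reaches level $ 1 $, i.e. ever increases its running maximum by one unit, equals $ p/q $; hence, by the strong Markov property,

$$ {\mathsf P} \{ S \geq k \} = \left ( {\mathsf P} \{ S \geq 1 \} \right ) ^ {k} = \left ( \frac{p}{q} \right ) ^ {k} . $$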

The laws describing an unrestricted random walk follow from theorems about the behaviour of the sequence of partial sums $ S _ {n} $, $ n = 1 , 2 ,\dots $. One of these laws asserts that for a symmetric random walk $ ( p = 1/2 ) $, the particle hits (infinitely often) any fixed point $ a $ with probability 1. When $ p < 1/2 $, the walk departs to the left with probability 1; in this case $ S < \infty $ with probability 1.

For a symmetric random walk, the time $ K _ {n} $ spent by the particle on the positive half-line (the number of positive terms in the sequence $ S _ {1} , \dots , S _ {n} $) will be closer to $ 0 $ or $ n $ than to $ n / 2 $ with a large probability. This is seen from the so-called arcsine law, which implies that for large $ n $ (see [1], Vol. 1),

$$ {\mathsf P} \left \{ \frac{K _ {n} }{n} < x \right \} \approx \frac{2}{\pi} \arcsin \sqrt x . $$

This implies, for example, that

$$ {\mathsf P} \left \{ \left | \frac{K _ {n} }{n} - \frac{1}{2} \right | < \frac{1}{4} \right \} = 1 - 2 {\mathsf P} \left \{ \frac{K _ {n} }{n} < \frac{1}{4} \right \} \approx 1 - \frac{4}{\pi} \cdot \frac{\pi }{6} = \frac{1}{3} $$

and that with probability $ 0.2 $ the particle spends at least $ 97.6 \% $ of the whole time on one side.
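The last two figures can be checked from the same approximation (a short calculation, not in the original): by the symmetry of the arcsine distribution,

$$ {\mathsf P} \left \{ \frac{K _ {n} }{n} \leq 0.024 \textrm{ or } \frac{K _ {n} }{n} \geq 0.976 \right \} \approx 2 \cdot \frac{2}{\pi} \arcsin \sqrt {0.024 } \approx 0.2 . $$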

There are relations connecting random walks with boundaries with unrestricted random walks. For example, if one sets $ Y ( x) = \min \{ k : S _ {k} \geq x \} $, then (see [2])

$$ {\mathsf P} \{ Y ( x) = n \} = \ \frac{x}{n} {\mathsf P} \{ S _ {n} = x \} . $$
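For instance, with $ x = 1 $, $ n = 3 $ and $ p = 1/2 $, the only path reaching level $ 1 $ for the first time at step $ 3 $ is $ ( - 1 , + 1 , + 1 ) $, so

$$ {\mathsf P} \{ Y ( 1) = 3 \} = \frac{1}{8} = \frac{1}{3} \cdot \frac{3}{8} = \frac{1}{3} {\mathsf P} \{ S _ {3} = 1 \} , $$

in agreement with the identity above.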

In an unrestricted random walk, the position $ S _ {n} $ of the particle for large $ n $ is described by the law of large numbers and the central limit theorem.

If the magnitudes $ \pm 1 $ of the jumps are changed to $ \pm \Delta $ (for small $ \Delta $) and if one assumes that $ p = ( 1 + \alpha \Delta ) / 2 $, then the position $ S _ {n} $ of the particle after $ n = t \Delta ^ {-2} $ steps will describe approximately (as $ \Delta \rightarrow 0 $) the behaviour at time $ t $ of the diffusion process with drift $ \alpha $, diffusion coefficient 1, and corresponding behaviour on the boundaries $ a $ and $ - b $ (if these are specified for the random walk $ S _ {n} $).
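The drift and the diffusion coefficient can be checked directly (a one-line calculation, not in the original): each jump has

$$ {\mathsf E} X _ {i} = \Delta ( 2 p - 1 ) = \alpha \Delta ^ {2} ,\ \ {\mathsf D} X _ {i} = \Delta ^ {2} - \alpha ^ {2} \Delta ^ {4} \approx \Delta ^ {2} , $$

so after $ n = t \Delta ^ {-2} $ steps the position has mean $ \approx \alpha t $ and variance $ \approx t $, matching a diffusion with drift $ \alpha $ and diffusion coefficient 1 at time $ t $.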

The notion of a random walk as described above can be generalized in many ways. The simplest random walks in $ \mathbf R ^ {d} $ for $ d > 1 $ are defined as follows. The particle leaves the origin and moves one step of length 1 in one of the $ 2 d $ directions parallel to the coordinate axes. Thus, its possible positions are the points of $ \mathbf R ^ {d} $ with integer coordinates. To define a random walk it is necessary to specify $ 2 d $ probabilities corresponding to the different directions. One obtains a symmetric random walk if all these are equal to $ 1/(2d) $. In the multi-dimensional case, the problem of a random walk with boundaries becomes much more complicated, since the shape of the boundaries becomes essentially more complex when $ d > 1 $. In the unrestricted case, Pólya's theorem holds (see [1], Vol. 1): for a symmetric random walk when $ d \leq 2 $, the particle will sooner or later return to its initial position with probability 1 (once, and thus infinitely often); but when $ d= 3 $ this probability is approximately equal to 0.35 (and when $ d > 3 $ it is still smaller).
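Pólya's theorem can be illustrated by a rough Monte Carlo estimate (truncated at a finite horizon, so it slightly under-estimates the return probability; the names are ours, not from the article):

    import random

    def return_probability(d, steps=5000, trials=2000):
        """Estimate the probability that the simple symmetric walk on the
        d-dimensional lattice returns to the origin within `steps` steps
        (a truncated, hence slightly low, estimate of the return probability)."""
        returned = 0
        for _ in range(trials):
            pos = [0] * d
            for _ in range(steps):
                axis = random.randrange(d)
                pos[axis] += random.choice((-1, 1))
                if not any(pos):        # all coordinates are zero: back at the origin
                    returned += 1
                    break
        return returned / trials

    # d = 1, 2: the estimate approaches 1; d = 3: roughly 0.35, as stated above.
    print(return_probability(3))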

Another possible generalization of the simplest random walk consists of taking arbitrarily distributed independent random variables $ X _ {1} , X _ {2} ,\dots $ in (1). In this case, the basic qualitative laws for unrestricted random walks and for random walks with boundaries are preserved. For example, the particle reaches one of the boundaries $ a $ or $ - b $ with probability 1. If $ {\mathsf E} X _ {i} \leq 0 $ and $ a = \infty $, then the boundary $ - b $ will be reached with probability 1. If the $ X _ {i} $ are integer valued and $ {\mathsf E} X _ {i} = 0 $, then the particle will return to the initial position with probability 1. For arbitrarily distributed $ X _ {i} $ with $ {\mathsf E} X _ {i} = 0 $, this statement holds only if one considers return to an interval rather than to a point.

The solution of problems concerned with the exit of random walks from an interval $ ( - b , a ) $ turns out to be much more difficult in the general case. At the same time, these problems have many applications in mathematical statistics (sequential analysis), in the insurance business, in queueing theory, etc. In their investigation, a decisive role is played by various functionals of a random walk $ \{ S _ {n} \} $, $ n = 0 , 1 \dots $ called boundary functionals. Among these are $ S = \sup _ {k \geq 0 } S _ {k} $, the time of the first passage through zero (in a positive direction), $ Y _ {+} = \min \{ {k } : {S _ {k} > 0 } \} $, the time of the first passage through zero (in a negative direction), $ Y _ {-} = \min \{ {k \geq 1 } : {S _ {k} \leq 0 } \} $, the first positive sum $ \chi _ {+} = S _ {Y _ {+} } $, and the first non-positive sum $ \chi _ {-} = S _ {Y _ {-} } $, etc. It turns out that the distribution of the jumps $ X _ {i} $ is connected with the distribution of these functionals by the following so-called factorization identity (see [1], Vol. 1; [2], [3]; see also Factorization identities). Let $ \phi ( \lambda ) = {\mathsf E} e ^ {i \lambda X _ {1} } $ be the characteristic function of $ X _ {1} $. Then, when $ | z | \leq 1 $ and $ \mathop{\rm Im} \lambda = 0 $,

$$ \tag{4 } 1 - z \phi ( \lambda ) = \ [ 1 - {\mathsf E} ( e ^ {i \lambda \chi _ {+} } z ^ {Y _ {+} } ; Y _ {+} < \infty ) ] [ 1 - {\mathsf E} ( e ^ {i \lambda \chi _ {-} } z ^ {Y _ {-} } ; Y _ {-} < \infty ) ] . $$

This identity reveals the connection between boundary problems for random walks and those in the theory of functions of a complex variable, since the factors on the right-hand side of (4) are uniquely determined by those in the canonical factorization of the function $ 1 - z \phi ( \lambda ) $ on the axis $ \mathop{\rm Im} \lambda = 0 $, that is, the expansion of this function as a product $ 1 - z \phi ( \lambda ) = A _ {z _ {+} } ( \lambda ) A _ {z _ {-} } ( \lambda ) $ for $ \mathop{\rm Im} \lambda = 0 $, where $ A _ {z _ \pm } ( \lambda ) $ are analytic in the upper and lower half-planes, respectively, do not have zeros and are continuous there (including the boundary). In the identity (4), when $ z = 1 $ one can replace the first factor by

$$ \frac{1 - {\mathsf P} \{ Y _ {+} < \infty \} }{ {\mathsf E} e ^ {i \lambda S } } = \ 1 - {\mathsf E} ( e ^ {i \lambda \chi _ {+} } ; Y _ {+} < \infty ) . $$

Identity (4) is just one of a group of factorization identities connecting the distributions of various boundary functionals. Another one is related to the Pollaczek–Spitzer identity

$$ \sum_{n=0}^ \infty z ^ {n} {\mathsf E} e ^ {i \lambda \overline{S} _ {n} } = \ \mathop{\rm exp} \left \{ \sum_{n=1}^ \infty \frac{z ^ {n} }{n} {\mathsf E} e ^ {i \lambda \max ( 0 , S _ {n} ) } \right \} , $$

where $ \overline{S} _ {n} = \max ( 0 , S _ {1} , \dots , S _ {n} ) $. Factorization identities provide a powerful method for studying boundary problems for random walks (see [1], Vol. 2; [2], [3]).

Boundary problems for random walks have been investigated rather completely, including their asymptotic analysis (see [1], Vol. 2; [4]–).

Analytically, the solution of boundary problems leads to integro-difference equations. For example, the probability $ u _ {n} ( x , a , b ) $ that a particle starting from a point $ x \in ( - b , a ) $ will leave this interval in time $ n $ satisfies the following equation (the total probability formula at the first jump):

$$ u _ {n+1} ( x , a , b ) = \ \int\limits _ {- b - x } ^ {a - x } u _ {n} ( x + y , a , b ) d F ( y) + 1 - F ( a - x ) + F ( - b - x ) , $$

where $ F ( x) = {\mathsf P} \{ X _ {1} < x \} $. By passing to the generating function $ u ( z , x , a , b ) = \sum_{n=1}^ \infty z ^ {n} u _ {n} ( x , a , b ) $ one obtains the usual integral equations. There are at least two approaches to the investigation of asymptotic properties of solutions of these equations. One of them is based on the study of analytic properties of the double transformation

$$ U ( z , \lambda , a , b ) = \ \int\limits e ^ {i \lambda x } u ( z , x , a , b ) d x $$

and its subsequent inversion (see [5]–). The other involves the methods of Vishik and Lyusternik for solving equations with a small parameter (see [4] and Differential equations with small parameter). The latter reveals profound connections between these problems and potential theory.
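The integro-difference equation above can also be iterated numerically. The following Python sketch does this for the lattice case of Bernoulli jumps $ \pm 1 $ (an illustration only; the function and variable names are ours, not from the article):

    def exit_probabilities(p, a, b, n_max):
        """u_n(x): probability that the simple lattice walk leaves (-b, a) within
        n steps when started at x -- the lattice special case of the
        integro-difference equation above, iterated via the total probability formula."""
        q = 1 - p
        inside = range(-b + 1, a)                # states strictly inside (-b, a)
        exited = lambda y: y >= a or y <= -b     # the walk has left the interval
        u = {x: 0.0 for x in inside}             # u_0 = 0: no exit in zero steps
        for _ in range(n_max):
            u = {x: p * (1.0 if exited(x + 1) else u[x + 1])
                  + q * (1.0 if exited(x - 1) else u[x - 1])
                 for x in inside}
        return u

    # Fair walk on (-4, 6): exit within 200 steps is all but certain.
    print(exit_probabilities(0.5, a=6, b=4, n_max=200)[0])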

Much of the above carries over to the case of random walks with dependent jumps, when the random variables $ S _ {n} $ are connected in a Markov chain, and also to multi-dimensional random walks in $ \mathbf R ^ {d} $, $ d > 1 $ (see [7]).

References

[1] W. Feller, "An introduction to probability theory and its applications", 1–2, Wiley (1950–1966)
[2] A.A. Borovkov, "Wahrscheinlichkeitstheorie", Birkhäuser (1976) (Translated from Russian)
[3] A.A. Borovkov, "Stochastic processes in queueing theory", Springer (1976) (Translated from Russian)
[4] V.S. Korolyuk, Yu.V. Borouvskikh, "Analytic problems in the asymptotics of probability distributions", Kishinev (1981) (In Russian)
[5] A.A. Borovkov, "New limit theorems in boundary value problems for sums of independent terms" Sibirsk. Mat. Zh., 3 : 5 (1962) pp. 645–694 (In Russian)
[6a] V.I. Lotov, "Asymptotic analysis of distributions in problems with two boundaries I" Theory Probab. Appl., 24 (1980) pp. 480–491 Teor. Veroyatn. i Primenen., 24 : 3 (1979) pp. 475–485
[6b] V.I. Lotov, "Asymptotic analysis of distributions in problems with two boundaries II" Theory Probab. Appl., 24 (1980) pp. 869–876 Teor. Veroyatn. i Primenen., 24 : 4 (1979) pp. 873–879
[7] F. Spitzer, "Principles of random walk", Van Nostrand (1964)

Comments

For applications to the physical and biological sciences see [a7] and the references quoted therein.

References

[a1] M.N. Barber, B.W. Ninham, "Random and restricted walks, theory and applications" , Gordon & Breach (1970)
[a2] K.L. Chung, "Markov chains with stationary transition probabilities" , Springer (1960)
[a3] A. Gut, "Stopped random walks. Limit theorems and applications" , Springer (1988)
[a4] J.G. Kemeny, J.L. Snell, A.W. Knapp, "Denumerable Markov chains" , Springer (1976)
[a5] J.H.B. Kemperman, "The passage problem for a stationary Markov chain" , Univ. Chicago Press (1961)
[a6] P. Lévy, "Processus stochastiques et mouvement Brownien" , Gauthier-Villars (1965)
[a7] E.W. Montroll, B.J. West, "On an enriched collection of stochastic processes" E.W. Montroll (ed.) J.L. Lebowitz (ed.) , Fluctuation Phenomena , Series in Statistical Mechanics , 7 , Elsevier (1987) pp. Chapt. 2
[a8] D. Revuz, "Markov chains" , North-Holland (1975)
How to Cite This Entry:
Random walk. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Random_walk&oldid=37790
This article was adapted from an original article by A.A. Borovkov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article