
Stationary stochastic process

From Encyclopedia of Mathematics
''stochastic process, homogeneous in time''
 
A [[Stochastic process|stochastic process]] <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s0872801.png" /> whose statistical characteristics do not change in the course of time <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s0872802.png" />, i.e. are invariant under translations in time: <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s0872803.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s0872804.png" />, for any fixed value of <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s0872805.png" /> (a real number or an integer, depending on whether the process runs in continuous or in discrete time). The concept of a stationary stochastic process is widely used in applications of probability theory to natural science and technology, since such processes accurately describe many real phenomena accompanied by unordered fluctuations. For example, the pulsations of current or voltage in an electrical circuit (electrical "noise") can be considered a stationary stochastic process if the circuit is in a stationary regime; the pulsations of velocity or pressure at a point of a turbulent flow form a stationary stochastic process if the flow is stationary; etc.
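As a concrete numerical illustration (the example and all parameter choices here are mine, not part of the article), one can simulate a simple stationary process — a first-order autoregression started from its stationary distribution — and check that its ensemble mean and variance do not drift with time:

```python
import numpy as np

# Illustration (mine, not from the article): X[t] = phi*X[t-1] + eps[t]
# with |phi| < 1, started from the stationary distribution
# N(0, sigma^2/(1-phi^2)), is a stationary process: the statistics at
# each time t are the same.
rng = np.random.default_rng(0)
phi, sigma = 0.6, 1.0
n_paths, n_steps = 5000, 200

x = np.zeros((n_paths, n_steps))
x[:, 0] = rng.normal(0.0, sigma / np.sqrt(1 - phi**2), n_paths)  # stationary start
for t in range(1, n_steps):
    x[:, t] = phi * x[:, t - 1] + rng.normal(0.0, sigma, n_paths)

means = x.mean(axis=0)       # ensemble mean at each time: constant (≈ 0)
variances = x.var(axis=0)    # ensemble variance: constant (≈ sigma^2/(1-phi^2))
```

Had the simulation instead been started from a fixed point, the ensemble variance would visibly grow toward its limit over the first steps, i.e. the process would not be stationary.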
  
 
In the mathematical theory of stationary stochastic processes, an important role is played by the moments of the probability distribution of the process <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s0872806.png" />, and especially by the moments of the first two orders — the mean value <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s0872807.png" />, and its covariance function <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s0872808.png" />, or, equivalently, the correlation function <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s0872809.png" />. In much of the research into the theory of stationary stochastic processes, the properties that are completely defined by the characteristics <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s08728010.png" /> and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s08728011.png" /> alone are studied (the so-called correlation theory or theory of second-order stationary stochastic processes). Accordingly, the stochastic processes <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s08728012.png" /> for which <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s08728013.png" /> and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s08728014.png" /> do not depend on <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s08728015.png" /> are often separated into a special class and are called stationary stochastic processes in the wide sense. 
The more special stochastic processes, none of whose characteristics change with time (so that the distribution function <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s08728016.png" /> of the <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s08728017.png" />-dimensional random variable <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s08728018.png" /> depends, for any <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s08728019.png" />, only on the <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s08728020.png" /> differences <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s08728021.png" />), are called stationary stochastic processes in the strict sense. Accordingly, the theory of stationary stochastic processes is divided into the theory of stationary stochastic processes in the strict sense and that of processes stationary in the wide sense, with a different mathematical apparatus used in each.
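The wide-sense definition can be checked empirically. In the following sketch (again my own illustration, using the same AR(1) example), the covariance of values a fixed lag apart is estimated at two different base times and compared with the theoretical covariance function, which depends only on the lag:

```python
import numpy as np

# Illustration (mine, not from the article): for a wide-sense stationary
# process, E[X(t) X(t+tau)] depends only on the lag tau, not on t.  For
# the AR(1) example the covariance function is
#     B(tau) = phi^|tau| * sigma^2 / (1 - phi^2).
rng = np.random.default_rng(1)
phi, sigma = 0.6, 1.0
n_paths, n_steps = 20000, 50

x = np.zeros((n_paths, n_steps))
x[:, 0] = rng.normal(0.0, sigma / np.sqrt(1 - phi**2), n_paths)
for t in range(1, n_steps):
    x[:, t] = phi * x[:, t - 1] + rng.normal(0.0, sigma, n_paths)

tau = 3
c_a = np.mean(x[:, 10] * x[:, 10 + tau])   # covariance at base time t = 10
c_b = np.mean(x[:, 30] * x[:, 30 + tau])   # covariance at base time t = 30
theory = phi**tau * sigma**2 / (1 - phi**2)
print(c_a, c_b, theory)  # all three agree up to sampling error
```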
 
 
The spectral decomposition of the correlation function <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280103.png" />, defined by formula (3), demonstrates that the mapping <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280104.png" />, which maps elements <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280105.png" /> of the Hilbert space <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280106.png" /> to elements <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280107.png" /> of the Hilbert space <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280108.png" /> of complex-valued functions on the set <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280109.png" /> whose modulus is square-integrable with respect to <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280110.png" />, is an isometric mapping of <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280111.png" /> into <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280112.png" />. 
This mapping can be extended to an isometric linear mapping <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280113.png" /> of the whole space <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280114.png" /> onto the space <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280115.png" />, a fact that allows one to reformulate many problems in the theory of stationary stochastic processes in the wide sense as problems in function theory.
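As a numerical illustration of the spectral point of view (the example, parameters and normalization are mine, not from the article), averaged periodograms of a discrete-time AR(1) process recover its spectral density, which is a rational function of the variable exp(i*lam):

```python
import numpy as np

# Illustration (mine): the AR(1) process X[t] = phi*X[t-1] + eps[t] has
# spectral density  f(lam) = sigma^2 / (2*pi*|1 - phi*exp(-i*lam)|^2),
# normalized so that the integral of f over [-pi, pi] is the variance.
# Averaging periodograms over many independent realizations recovers f.
rng = np.random.default_rng(2)
phi, sigma = 0.6, 1.0
n_paths, n = 2000, 256

x = np.zeros((n_paths, n))
x[:, 0] = rng.normal(0.0, sigma / np.sqrt(1 - phi**2), n_paths)
for t in range(1, n):
    x[:, t] = phi * x[:, t - 1] + rng.normal(0.0, sigma, n_paths)

lam = 2 * np.pi * np.fft.rfftfreq(n)                       # frequencies in [0, pi]
periodogram = np.abs(np.fft.rfft(x, axis=1))**2 / (2 * np.pi * n)
estimate = periodogram.mean(axis=0)                        # averaged periodogram
theory = sigma**2 / (2 * np.pi * np.abs(1 - phi * np.exp(-1j * lam))**2)
```

A single periodogram does not converge pointwise to the spectral density; only after averaging (over realizations, as here, or over neighbouring frequencies) does the estimate stabilize.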
 
  
A significant part of the theory of stationary stochastic processes in the wide sense is devoted to methods of solving linear approximation problems for such processes, i.e. methods of finding the linear combination of "known" values of <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280116.png" /> that best approximates (in the sense of minimum mean-square error) a certain "unknown" value of the same process or some other "unknown" random variable <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280117.png" />. In particular, the problem of optimal linear extrapolation of <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280118.png" /> consists of finding the best approximation <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280119.png" /> of the value <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280120.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280121.png" />, that depends linearly on the "past values" of <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280122.png" /> with <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280123.png" />; the problem of optimal linear interpolation consists of finding the best approximation for <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280124.png" /> that depends linearly on the values of <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280125.png" />, where <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280126.png" /> runs through all values that do not belong to a specific interval of the time axis (to which <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280127.png" /> does belong); and the problem of optimal linear filtering can be formulated as the problem of finding the best approximation <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280128.png" /> for a certain random variable <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280129.png" /> (usually the value for some <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280130.png" /> of a stationary stochastic process <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280131.png" /> correlated with <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280132.png" />, where <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280133.png" /> most often plays the part of a "signal", while <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280134.png" /> is the sum of the "signal" and an interfering "noise" <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280135.png" />, and this sum is known from observations) that depends linearly on the values of <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280136.png" /> with <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280137.png" /> (see [[Stochastic processes, prediction of|Stochastic processes, prediction of]]; [[Stochastic processes, filtering of|Stochastic processes, filtering of]]; [[Stochastic processes, interpolation of|Stochastic processes, interpolation of]]).
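The mechanics of such a best linear approximation can be made concrete in a small sketch (notation and example mine, not from the article): for a wide-sense stationary sequence with covariance function B, the best linear one-step predictor based on the last p values is an orthogonal projection, and its coefficients solve the normal (Yule–Walker) equations:

```python
import numpy as np

# Sketch (mine): the best linear one-step predictor
#     X^(t+1) = a[0]*X(t) + ... + a[p-1]*X(t-p+1)
# is the orthogonal projection of X(t+1) onto span{X(t), ..., X(t-p+1)}.
# Orthogonality of the error to each X(t-i) gives the normal
# (Yule-Walker) equations  R a = r,  R[i, j] = B(i-j),  r[i] = B(i+1).
def predictor_coefficients(B, p):
    """Solve the normal equations for the one-step predictor of memory p."""
    R = np.array([[B(abs(i - j)) for j in range(p)] for i in range(p)])
    r = np.array([B(i + 1) for i in range(p)])
    return np.linalg.solve(R, r)

# For the AR(1) covariance B(tau) = phi^|tau| * sigma^2 / (1 - phi^2)
# the exact predictor is phi * X(t), i.e. a = [phi, 0, ..., 0].
phi, sigma = 0.6, 1.0
a = predictor_coefficients(lambda tau: phi**abs(tau) * sigma**2 / (1 - phi**2), p=5)
print(a)  # ≈ [0.6, 0, 0, 0, 0]
```

That all but the first coefficient vanish reflects the Markov character of the AR(1) example; for a general covariance function every past value would enter the predictor.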
  
All these problems reduce geometrically to the problem of orthogonally projecting a point of the Hilbert space <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280138.png" /> (or of its extension) onto a given subspace of this space. Relying on this geometric interpretation and on the isomorphism of the spaces <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280139.png" /> and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280140.png" />, A.N. Kolmogorov deduced general formulas that make it possible to determine the mean-square error of optimal linear extrapolation, or of interpolation in the case where the value of <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280141.png" /> is unknown only when <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280142.png" />, by means of the spectral function <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280143.png" /> of the stationary stochastic process <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280144.png" /> in discrete time <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280145.png" /> (see [[#References|[2]]], [[#References|[5]]]–[[#References|[6]]]). For the extrapolation problem, analogous results were obtained for processes <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280146.png" /> in continuous time by M.G. Krein and K. Karhunen. N. Wiener [[#References|[8]]] demonstrated that the search for the best approximation <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280147.png" /> or <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280148.png" /> in problems of optimal linear extrapolation and filtering can be reduced to the solution of a certain integral equation of Wiener–Hopf type, or (when <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280149.png" /> is discrete) of the discrete analogue of such an equation, which makes it possible to use the factorization method (see [[Wiener–Hopf equation|Wiener–Hopf equation]]; [[Wiener–Hopf method|Wiener–Hopf method]]). Problems of optimal linear extrapolation or filtering of a stationary stochastic process <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280150.png" /> in continuous time, in the case where not all its past values for <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280151.png" /> are known but only its values on a finite interval <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280152.png" />, as well as the problem of optimal linear interpolation of such an <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280153.png" />, can be reduced to problems of recovering a differential equation of a special form (a "generalized string equation") from its spectrum (see [[#References|[9]]], [[#References|[10]]]).
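One classical instance of Kolmogorov's formulas is the expression for the mean-square error of optimal one-step linear extrapolation in discrete time through the spectral density. The following numerical check (conventions and normalization assumed by me, chosen to match the AR(1) example used above) verifies it in a case where the answer is known exactly:

```python
import numpy as np

# Numerical check (conventions mine): Kolmogorov's formula for the
# mean-square error of optimal one-step linear extrapolation,
#   sigma_pred^2 = 2*pi * exp( (1/(2*pi)) * integral of log f(lam), lam in [-pi, pi] ),
# where f is the spectral density.  For the AR(1) process
# X[t] = phi*X[t-1] + eps[t] the best predictor is phi*X[t], so the
# error must equal the innovation variance sigma^2.
phi, sigma = 0.6, 1.0
lam = np.linspace(-np.pi, np.pi, 200000, endpoint=False)
f = sigma**2 / (2 * np.pi * np.abs(1 - phi * np.exp(-1j * lam))**2)
# The mean of log f over a uniform grid approximates (1/(2*pi)) * integral.
sigma_pred_sq = 2 * np.pi * np.exp(np.mean(np.log(f)))
print(sigma_pred_sq)  # ≈ sigma^2 = 1.0
```

The check works because the integral of log|1 - phi*exp(-i*lam)|^2 over a full period vanishes for |phi| < 1 (the mean-value property of the harmonic function log|1 - phi*z|), leaving exactly sigma^2.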
  
 
The above approaches to the solution of problems of optimal linear extrapolation, interpolation and filtering provide explicit formulas for the required best approximation <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280154.png" /> or <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280155.png" /> that are simple enough for practical use only in certain exceptional cases. One important case in which such explicit formulas do exist is that of a stationary stochastic process <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280156.png" /> with a spectral density <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280157.png" /> that is rational in <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280158.png" /> (if <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280159.png" /> is discrete) or in <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280160.png" /> (if <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280161.png" /> is continuous); this case was studied in detail by Wiener [[#References|[8]]] (in application to problems of extrapolation and filtering based on the values with <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280162.png" />). 
It was subsequently demonstrated that for stationary stochastic processes with rational spectral density there is also an explicit solution of the problems of linear interpolation, extrapolation and filtering from data on a finite interval <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280163.png" /> (see, for example, [[#References|[2]]], [[#References|[11]]]). The simplicity of processes with rational spectral density is explained by the fact that such stationary stochastic processes (and practically only they) are one-dimensional components of multi-dimensional stationary [[Markov process|Markov processes]] (see [[#References|[12]]]).
 
Line 66: Line 66:
  
 
====References====
 
====References====
<table><TR><TD valign="top">[1]</TD> <TD valign="top"> E.E. Slutskii, , ''Selected works'' , Moscow (1980) pp. 252–268 (In Russian)</TD></TR><TR><TD valign="top">[2]</TD> <TD valign="top"> Yu.A. Rozanov,   "Stationary random processes" , Holden-Day (1967) (Translated from Russian)</TD></TR><TR><TD valign="top">[3]</TD> <TD valign="top"> H. Cramér,   M.R. Leadbetter,   "Stationary and related stochastic processes" , Wiley (1967)</TD></TR><TR><TD valign="top">[4]</TD> <TD valign="top"> A.Ya. Khinchin,   ''Uspekhi Mat. Nauk'' : 5 (1938) pp. 42–51</TD></TR><TR><TD valign="top">[5]</TD> <TD valign="top"> A.N. Kolmogorov,   "Interpolation and extrapolation of stationary stochastic series" ''Izv. Akad. Nauk SSSR Ser. Mat.'' , '''5''' : 1 (1941) pp. 3–14 (In Russian) (German abstract)</TD></TR><TR><TD valign="top">[6]</TD> <TD valign="top"> J.L. Doob,   "Stochastic processes" , Wiley (1953)</TD></TR><TR><TD valign="top">[7]</TD> <TD valign="top"> I.I. [I.I. Gikhman] Gihman,   A.V. [A.V. Skorokhod] Skorohod,   "The theory of stochastic processes" , '''1''' , Springer (1971) (Translated from Russian)</TD></TR><TR><TD valign="top">[8]</TD> <TD valign="top"> N. Wiener,   "Extrapolation, interpolation and smoothing of stationary time series" , M.I.T. (1949)</TD></TR><TR><TD valign="top">[9]</TD> <TD valign="top"> M.G. Krein,   "On a basic approximation problem of the theory of extrapolation and filtering of stationary stochastic processes" ''Dokl. Akad. Nauk SSSR'' , '''94''' : 1 (1954) pp. 13–16 (In Russian)</TD></TR><TR><TD valign="top">[10]</TD> <TD valign="top"> H. Dym,   H.P. McKean,   "Gaussian processes, function theory, and the inverse spectral problem" , Acad. Press (1976)</TD></TR><TR><TD valign="top">[11]</TD> <TD valign="top"> A.M. Yaglom,   "Extrapolation, interpolation and filtering of stationary stochastic processes with rational spectral density" ''Trudy Moskov. Mat. Obshch.'' , '''4''' (1955) pp. 
333–374 (In Russian)</TD></TR><TR><TD valign="top">[12]</TD> <TD valign="top"> J.L. Doob,   "The elementary Gaussian processes" ''Ann. Math. Stat.'' , '''15''' (1944) pp. 229–282</TD></TR></table>
+
<table><TR><TD valign="top">[1]</TD> <TD valign="top"> E.E. Slutskii, , ''Selected works'' , Moscow (1980) pp. 252–268 (In Russian)</TD></TR><TR><TD valign="top">[2]</TD> <TD valign="top"> Yu.A. Rozanov, "Stationary random processes" , Holden-Day (1967) (Translated from Russian) {{MR|0214134}} {{ZBL|0152.16302}} </TD></TR><TR><TD valign="top">[3]</TD> <TD valign="top"> H. Cramér, M.R. Leadbetter, "Stationary and related stochastic processes" , Wiley (1967) {{MR|0217860}} {{ZBL|0162.21102}} </TD></TR><TR><TD valign="top">[4]</TD> <TD valign="top"> A.Ya. Khinchin, ''Uspekhi Mat. Nauk'' : 5 (1938) pp. 42–51</TD></TR><TR><TD valign="top">[5]</TD> <TD valign="top"> A.N. Kolmogorov, "Interpolation and extrapolation of stationary stochastic series" ''Izv. Akad. Nauk SSSR Ser. Mat.'' , '''5''' : 1 (1941) pp. 3–14 (In Russian) (German abstract)</TD></TR><TR><TD valign="top">[6]</TD> <TD valign="top"> J.L. Doob, "Stochastic processes" , Wiley (1953) {{MR|1570654}} {{MR|0058896}} {{ZBL|0053.26802}} </TD></TR><TR><TD valign="top">[7]</TD> <TD valign="top"> I.I. [I.I. Gikhman] Gihman, A.V. [A.V. Skorokhod] Skorohod, "The theory of stochastic processes" , '''1''' , Springer (1971) (Translated from Russian) {{MR|0636254}} {{MR|0651015}} {{MR|0375463}} {{MR|0346882}} {{ZBL|0531.60002}} {{ZBL|0531.60001}} {{ZBL|0404.60061}} {{ZBL|0305.60027}} {{ZBL|0291.60019}} </TD></TR><TR><TD valign="top">[8]</TD> <TD valign="top"> N. Wiener, "Extrapolation, interpolation and smoothing of stationary time series" , M.I.T. (1949) {{MR|0031213}} {{ZBL|0036.09705}} </TD></TR><TR><TD valign="top">[9]</TD> <TD valign="top"> M.G. Krein, "On a basic approximation problem of the theory of extrapolation and filtering of stationary stochastic processes" ''Dokl. Akad. Nauk SSSR'' , '''94''' : 1 (1954) pp. 13–16 (In Russian)</TD></TR><TR><TD valign="top">[10]</TD> <TD valign="top"> H. Dym, H.P. McKean, "Gaussian processes, function theory, and the inverse spectral problem" , Acad. 
Press (1976) {{MR|0448523}} {{ZBL|0327.60029}} </TD></TR><TR><TD valign="top">[11]</TD> <TD valign="top"> A.M. Yaglom, "Extrapolation, interpolation and filtering of stationary stochastic processes with rational spectral density" ''Trudy Moskov. Mat. Obshch.'' , '''4''' (1955) pp. 333–374 (In Russian)</TD></TR><TR><TD valign="top">[12]</TD> <TD valign="top"> J.L. Doob, "The elementary Gaussian processes" ''Ann. Math. Stat.'' , '''15''' (1944) pp. 229–282 {{MR|0010931}} {{ZBL|0060.28907}} </TD></TR></table>
  
  
Line 73: Line 73:
 
In the English language literature one says ergodic stationary process rather than metrically-transitive stationary process. See also [[Gaussian process|Gaussian process]].
 
In the English language literature one says ergodic stationary process rather than metrically-transitive stationary process. See also [[Gaussian process|Gaussian process]].
  
The terminology concerning the phrases "correlation function" and "covariance function" is not yet completely standardized. In probability theory the following terminology seems just about universally adopted. Given two random variables <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280179.png" /> their [[Covariance|covariance]] is <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280180.png" />, their [[Correlation coefficient|correlation coefficient]] is <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280181.png" />, where <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280182.png" /> is the variance of <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280183.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280184.png" /> (cf. [[Dispersion|Dispersion]]), and there is no special name for the mixed second-order moment <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280185.png" />. Correspondly one has the terms covariance function and correlation function, also termed auto-correlation function and auto-covariance function, for the following quantities associated to a stochastic process:
+
The terminology concerning the phrases "correlation function" and "covariance function" is not yet completely standardized. In probability theory the following terminology seems just about universally adopted. Given two random variables <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280179.png" /> their [[Covariance|covariance]] is <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280180.png" />, their [[Correlation coefficient|correlation coefficient]] is <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280181.png" />, where <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280182.png" /> is the variance of <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280183.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280184.png" /> (cf. [[Dispersion|Dispersion]]), and there is no special name for the mixed second-order moment <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280185.png" />. Correspondly one has the terms covariance function and correlation function, also termed auto-correlation function and auto-covariance function, for the following quantities associated to a stochastic process:
  
 
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280186.png" /></td> </tr></table>
 
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280186.png" /></td> </tr></table>
Line 85: Line 85:
 
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280190.png" /></td> </tr></table>
 
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280190.png" /></td> </tr></table>
  
However, in application areas of probability theory a somewhat different terminology is also employed, for instance in oceanology, hydrology and electrical engineering (cf. [[#References|[a1]]]–[[#References|[a4]]]). E.g., one also finds the phrase "correlation function" for the quantity <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280191.png" /> (instead of covariance function). Also, one regularly finds the terminology "correlation function" for the quantity
+
However, in application areas of probability theory a somewhat different terminology is also employed, for instance in oceanology, hydrology and electrical engineering (cf. [[#References|[a1]]]–[[#References|[a4]]]). E.g., one also finds the phrase "correlation function" for the quantity <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280191.png" /> (instead of covariance function). Also, one regularly finds the terminology "correlation function" for the quantity
  
 
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280192.png" /></td> </tr></table>
 
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280192.png" /></td> </tr></table>
  
which is also in agreement with the phrase "correlation function" (pair correlation function) as it is used in statistical mechanics, cf. e.g. [[#References|[a5]]].
+
which is also in agreement with the phrase "correlation function" (pair correlation function) as it is used in statistical mechanics, cf. e.g. [[#References|[a5]]].
  
 
If the process is stationary, the distinctions are minor. Thus, if <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280193.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280194.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280195.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280196.png" />.
 
If the process is stationary, the distinctions are minor. Thus, if <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280193.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280194.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280195.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s087/s087280/s087280196.png" />.
  
 
====References====
 
====References====
<table><TR><TD valign="top">[a1]</TD> <TD valign="top"> A.H. Jazwinski,   "Stochastic processes and filtering theory" , Acad. Press (1970) pp. 53–54</TD></TR><TR><TD valign="top">[a2]</TD> <TD valign="top"> Y. Sawaragi,   Y. Sunahara,   T. Nakamizo,   "Statistical decision theory in adaptive control systems" , Acad. Press (1967)</TD></TR><TR><TD valign="top">[a3]</TD> <TD valign="top"> A.S. Monin,   R.V. Ozmidov,   "Turbulence in the ocean" , Reidel (1985) (Translated from Russian)</TD></TR><TR><TD valign="top">[a4]</TD> <TD valign="top"> K. Sobczyk,   "Stochastic differential equations. With applications to physics and engineering" , Kluwer (1991) pp. 23</TD></TR><TR><TD valign="top">[a5]</TD> <TD valign="top"> C.J. Thompson,   "Mathematical statistical mechanics" , Princeton Univ. Press (1972)</TD></TR></table>
+
<table><TR><TD valign="top">[a1]</TD> <TD valign="top"> A.H. Jazwinski, "Stochastic processes and filtering theory" , Acad. Press (1970) pp. 53–54 {{MR|}} {{ZBL|0203.50101}} </TD></TR><TR><TD valign="top">[a2]</TD> <TD valign="top"> Y. Sawaragi, Y. Sunahara, T. Nakamizo, "Statistical decision theory in adaptive control systems" , Acad. Press (1967) {{MR|}} {{ZBL|0189.47003}} </TD></TR><TR><TD valign="top">[a3]</TD> <TD valign="top"> A.S. Monin, R.V. Ozmidov, "Turbulence in the ocean" , Reidel (1985) (Translated from Russian)</TD></TR><TR><TD valign="top">[a4]</TD> <TD valign="top"> K. Sobczyk, "Stochastic differential equations. With applications to physics and engineering" , Kluwer (1991) pp. 23 {{MR|1135326}} {{ZBL|0762.60050}} </TD></TR><TR><TD valign="top">[a5]</TD> <TD valign="top"> C.J. Thompson, "Mathematical statistical mechanics" , Princeton Univ. Press (1972) {{MR|0469020}} {{ZBL|0244.60082}} </TD></TR></table>

Revision as of 10:32, 27 March 2012

stochastic process, homogeneous in time

A stochastic process $X(t)$ whose statistical characteristics do not change in the course of time $t$, i.e. are invariant relative to translations in time: $X(t)\rightarrow X(t+a)$, $t\rightarrow t+a$, for any fixed value of $a$ (either a real number or an integer, depending on whether one is dealing with a stochastic process in continuous or in discrete time). The concept of a stationary stochastic process is widely used in applications of probability theory in various areas of natural science and technology, since these processes accurately describe many real phenomena accompanied by unordered fluctuations. For example, the pulsations of the current or the voltage in an electrical circuit (electrical "noise") can be considered as stationary stochastic processes if the circuit operates in a steady-state regime; the pulsations of velocity or pressure at a point of a turbulent flow are stationary stochastic processes if the flow is stationary, etc.

In the mathematical theory of stationary stochastic processes, an important role is played by the moments of the probability distribution of the process $X(t)$, and especially by the moments of the first two orders: the mean value $\mathsf{E}X(t)=m$ and its covariance function $\mathsf{E}[X(t+\tau)-m][X(t)-m]$ or, equivalently, the correlation function $B(\tau)=\mathsf{E}X(t+\tau)X(t)$. In much of the research into the theory of stationary stochastic processes, only those properties are studied that are completely determined by the characteristics $m$ and $B(\tau)$ alone (the so-called correlation theory, or theory of second-order stationary stochastic processes). Accordingly, the stochastic processes for which $\mathsf{E}X(t)$ and $\mathsf{E}X(t+\tau)X(t)$ do not depend on $t$ are often separated into a special class and are called stationary stochastic processes in the wide sense. The more special stochastic processes, none of whose characteristics change with time (so that the distribution function of the $n$-dimensional random variable $\{X(t_1),\dots,X(t_n)\}$ depends, for any $n$, only on the differences $t_2-t_1,\dots,t_n-t_1$), are called stationary stochastic processes in the strict sense. Accordingly, the theory of stationary stochastic processes is divided into the theory of stationary stochastic processes in the strict sense and in the wide sense, with a different mathematical apparatus used in each.

In the strict sense, the theory can be stated outside the framework of probability theory as the theory of one-parameter groups of measure-preserving transformations of a measure space; this theory is very close to the general theory of dynamical systems (cf. Dynamical system) and to ergodic theory. The most important general theorem of the theory of stationary stochastic processes in the strict sense is the Birkhoff–Khinchin ergodic theorem, according to which for any stationary stochastic process in the strict sense having a mathematical expectation (i.e. such that $\mathsf{E}|X(t)|<\infty$), the limit

$$\lim_{T\to\infty}\frac{1}{T}\int_0^T X(t)\,dt \tag{1}$$

or

$$\lim_{N\to\infty}\frac{1}{N}\sum_{t=1}^{N}X(t) \tag{1a}$$

exists with probability 1 (formula (1) relates to processes in continuous time, while (1a) relates to processes in discrete time). There is a result of E.E. Slutskii [1], related to stationary stochastic processes in the wide sense, which states that the limit (1) or (1a) exists in mean square. This limit coincides with $m=\mathsf{E}X(t)$ if and only if

$$\lim_{T\to\infty}\frac{1}{T}\int_0^T B(\tau)\,d\tau=0 \tag{2}$$

or

$$\lim_{N\to\infty}\frac{1}{N}\sum_{\tau=0}^{N-1}B(\tau)=0, \tag{2a}$$

where

$$B(\tau)=\mathsf{E}[X(t+\tau)-m][X(t)-m]$$

(the von Neumann, or mean-square, ergodic theorem). These conditions are satisfied, in particular, when $B(\tau)\to 0$ as $\tau\to\infty$. The Birkhoff–Khinchin theorem can be applied to various stationary stochastic processes in the strict sense of the form $Y(t)=g\{X\}(t)$, where $g$ is an arbitrary functional of the stationary stochastic process $X(t)$ such that $Y(t)$ is a random variable which has a mathematical expectation; if for all such stationary stochastic processes the corresponding limit coincides with $\mathsf{E}Y(t)$, then $X(t)$ is called a metrically transitive stationary stochastic process. For stationary Gaussian stochastic processes $X(t)$, the condition of being stationary in the strict sense coincides with the condition of being stationary in the wide sense; metric transitivity occurs if and only if the spectral function $F(\lambda)$ of $X(t)$ is a continuous function of $\lambda$ (see, for example, [2], [3]). There are, in general, no simple necessary and sufficient conditions for the metric transitivity of a stationary stochastic process $X(t)$.
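The ergodic average (1a) and the role of condition (2a) can be illustrated numerically. The sketch below (an illustration added here, not part of the original article; it assumes numpy) simulates a stationary AR(1) sequence, whose correlation function $B(\tau)=\sigma^2 a^{|\tau|}/(1-a^2)$ tends to zero, so the time average converges to the mean.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stationary AR(1): X(t) = mu + a*(X(t-1) - mu) + eps(t), |a| < 1.
# Its correlation function B(tau) = sigma^2 * a**|tau| / (1 - a**2) -> 0,
# so condition (2a) holds and the time average (1a) converges to mu.
mu, a, sigma, N = 2.0, 0.6, 1.0, 200_000

x = np.empty(N)
# start from the stationary distribution: variance sigma^2 / (1 - a^2)
x[0] = mu + rng.normal(0.0, sigma / np.sqrt(1 - a**2))
eps = rng.normal(0.0, sigma, N)
for t in range(1, N):
    x[t] = mu + a * (x[t - 1] - mu) + eps[t]

time_average = x.mean()  # the discrete-time average (1a), close to mu
```

The AR(1) model is only a convenient concrete choice; any wide-sense stationary process satisfying (2a) would behave the same way.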

Apart from the above result relating to metric transitivity, there are also other results specifically for stationary Gaussian stochastic processes. For these processes, detailed studies have been made of the local properties of the realizations of $X(t)$ (i.e. of individual observed values), and of the statistical properties of the sequence of zeros or maxima of the realizations of $X(t)$, and of the points of intersection of it with a given level $x$ (see, for example, [3]). A typical example of the results related to intersections with a level is the statement that, given broad regularity conditions, the set of points of intersection of a high level $x$ with the stationary Gaussian stochastic process $X(t)$, in a certain special time scale (dependent on $x$ and tending rapidly to infinity as $x$ grows), converges to a Poisson flow of events of unit intensity when $x\to\infty$ (see [3]).

When studying stationary stochastic processes in the wide sense, one considers the Hilbert space $H$ consisting of the linear combinations $\sum_k c_k X(t_k)$ of values of the process and the mean-square limits of sequences of such linear combinations, with the scalar product defined by the formula $(Y_1,Y_2)=\mathsf{E}Y_1\overline{Y_2}$. In this case, the transformation $X(t)\rightarrow X(t+a)$, where $a$ is a fixed number, generates a linear unitary operator $U_a$ mapping the space $H$ onto itself; the family of operators $U_a$ clearly satisfies the condition $U_{a+b}=U_aU_b$, while the values $X(t)=U_tX(0)$ form a set of points (a curve if the time $t$ is continuous, and a countable sequence of points if the time is discrete) mapped onto itself by all the operators $U_a$. Accordingly, the theory of stationary stochastic processes in the wide sense can be reformulated in terms of functional analysis as the study of sets of points $U_t x$ of a Hilbert space $H$, where $U_t$ is a family of linear unitary operators such that $U_{t+s}=U_tU_s$ (cf. also Semi-group of operators).

Spectral considerations, based on the expansion of the stochastic process and its correlation function into a Fourier–Stieltjes integral, are central to the theory of stationary stochastic processes in the wide sense. By Khinchin's theorem [4] (which is a simple consequence of Bochner's analytic theorem on the general form of a positive-definite function), the correlation function $B(\tau)$ of a stationary stochastic process in continuous time can always be represented in the form

$$B(\tau)=\int_{-\infty}^{\infty}e^{i\lambda\tau}\,dF(\lambda), \tag{3}$$

where $F(\lambda)$ is a bounded monotone non-decreasing function of $\lambda$, $-\infty<\lambda<\infty$; the Herglotz theorem on the general form of positive-definite sequences similarly shows that the same representation, but with limits of integration $-\pi$ and $\pi$, also holds for the correlation function of a stationary stochastic process in discrete time. If the correlation function decreases sufficiently rapidly as $|\tau|\to\infty$ (as is most often the case in applications under the condition that by $X(t)$ one understands the difference $X(t)-\mathsf{E}X(t)$, i.e. it is assumed that $\mathsf{E}X(t)=0$), then the integral on the right-hand side of (3) becomes an ordinary Fourier integral

$$B(\tau)=\int_{-\infty}^{\infty}e^{i\lambda\tau}f(\lambda)\,d\lambda, \tag{4}$$

where $f(\lambda)=F'(\lambda)$ is a non-negative function. The function $F(\lambda)$ is called the spectral function of $X(t)$, while the function $f(\lambda)$ (in cases where the equality (4) holds) is called its spectral density. Starting from the Khinchin formula (3) (or from the definition of the process in the form of the set of points $U_tX(0)$ in the Hilbert space $H$, and Stone's theorem on the spectral representation of one-parameter groups of unitary operators in a Hilbert space), it can also be demonstrated that the process $X(t)$ itself admits a spectral decomposition of the form

$$X(t)=\int_{-\infty}^{\infty}e^{i\lambda t}\,dZ(\lambda), \tag{5}$$

where $Z(\lambda)$ is a random function with uncorrelated increments (i.e. $\mathsf{E}\,\Delta Z(\lambda_1)\overline{\Delta Z(\lambda_2)}=0$ for increments over disjoint intervals) satisfying the condition $\mathsf{E}|dZ(\lambda)|^2=dF(\lambda)$, while the integral on the right-hand side is understood to be the mean-square limit of the corresponding sequence of integral sums. The decomposition (5) provides grounds for considering any stationary stochastic process in the wide sense as a superposition of a set of non-correlated harmonic oscillations of different frequencies with random amplitudes and phases; the spectral function $F(\lambda)$ and the spectral density $f(\lambda)$ define the distribution of the average energy (or, more accurately, of the power) of the harmonic oscillations of the various frequencies $\lambda$ that constitute $X(t)$ (as a result of which the function $f(\lambda)$ in applied research is often called the energy spectrum, or power spectrum, of $X(t)$).
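The spectral representation of the correlation function can be checked numerically in the discrete-time case (limits of integration $\pm\pi$). The sketch below (an illustration added here, not part of the original article; it assumes numpy) uses the standard AR(1) example, for which both the spectral density and the correlation function are known in closed form, and verifies the Herglotz/Khinchin relation by quadrature.

```python
import numpy as np

# Discrete-time AR(1) with coefficient a and innovation variance s2 has
#   correlation function  B(tau) = s2 * a**|tau| / (1 - a**2),
#   spectral density      f(lam) = s2 / (2*pi * |1 - a*exp(-i*lam)|**2),
# and they are linked by  B(tau) = integral_{-pi}^{pi} e^{i*lam*tau} f(lam) dlam.
a, s2 = 0.5, 1.0

lam = np.linspace(-np.pi, np.pi, 20001)
f = s2 / (2 * np.pi * np.abs(1 - a * np.exp(-1j * lam)) ** 2)

def B_spectral(tau):
    # trapezoidal evaluation of the spectral integral (discrete-time (3)/(4))
    g = np.exp(1j * lam * tau) * f
    h = lam[1] - lam[0]
    return (h * (0.5 * g[0] + g[1:-1].sum() + 0.5 * g[-1])).real

def B_exact(tau):
    return s2 * a ** abs(tau) / (1 - a ** 2)
```

For smooth periodic integrands the trapezoidal rule on a fine grid is extremely accurate, so `B_spectral(tau)` matches `B_exact(tau)` to many digits.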

The spectral decomposition of the correlation function, given by formula (3), demonstrates that the mapping $X(t)\mapsto e^{i\lambda t}$, which maps the elements $X(t)$ of the Hilbert space $H$ to the elements $e^{i\lambda t}$ of the Hilbert space $L^2(dF)$ of complex-valued functions of $\lambda$ whose modulus is square-integrable with respect to $dF(\lambda)$, is an isometric mapping into $L^2(dF)$. This mapping can be extended to an isometric linear mapping of the whole space $H$ onto the space $L^2(dF)$, a fact that allows one to reformulate many problems in the theory of stationary stochastic processes in the wide sense as problems in function theory.

A significant part of the theory of stationary stochastic processes in the wide sense is devoted to methods of solving linear approximation problems for such processes, i.e. methods of finding a linear combination of certain "known" values of $X(t)$ that best approximates (in the sense of the minimum mean-square error) a certain "unknown" value of the same process or some "unknown" random variable $Y$. In particular, the problem of optimal linear extrapolation of $X(t)$ consists of finding the best approximation $\hat X(t+\tau)$ of the value $X(t+\tau)$, $\tau>0$, that linearly depends on the "past values" $X(s)$ with $s\le t$; the problem of optimal linear interpolation consists of finding the best approximation for $X(t)$ that linearly depends on the values $X(s)$, where $s$ runs through all values that do not belong to a specific interval of the time axis (to which $t$ does belong); the problem of optimal linear filtering can be formulated as the problem of finding the best approximation $\hat Y$ for a certain random variable $Y$ (which is usually the value $X_1(t+\tau)$, for some $\tau$, of a stationary stochastic process $X_1(t)$ correlated with $X(t)$, whereby $X_1(t)$ most often plays the part of a "signal", while $X(t)=X_1(t)+X_2(t)$ is the sum of the "signal" and a "noise" $X_2(t)$ that interferes with it, and is known from the observations) that linearly depends on the values $X(s)$ with $s\le t$ (see Stochastic processes, prediction of; Stochastic processes, filtering of; Stochastic processes, interpolation of).

All these problems reduce geometrically to the problem of orthogonally projecting a point of the Hilbert space $H$ (or of its extension) onto a given subspace of this space. Relying on this geometric interpretation and on the isomorphism of the spaces $H$ and $L^2(dF)$, A.N. Kolmogorov deduced general formulas that make it possible to determine the mean-square error of optimal linear extrapolation, or of optimal linear interpolation in the case where the value of $X(t)$ is unknown at a single moment of time only, by means of the spectral function $F(\lambda)$ of a stationary stochastic process in discrete time (see [2], [5], [6]). For the extrapolation problem, the same results were obtained for processes in continuous time by M.G. Krein and K. Karhunen. N. Wiener [8] demonstrated that the search for the best approximation $\hat X(t+\tau)$ or $\hat Y$ in problems of optimal linear extrapolation and filtering can be reduced to the solution of a certain integral equation of Wiener–Hopf type, or (when the time is discrete) of the discrete analogue of such an equation, which makes it possible to use the factorization method (see Wiener–Hopf equation; Wiener–Hopf method). Problems of optimal linear extrapolation or filtering of a stationary stochastic process in continuous time in the case where not all its past values are known but only its values on a finite interval $t-T\le s\le t$, as well as the problem of optimal linear interpolation of such a process, can be reduced to certain problems on determining a differential equation of special form (a "generalized string equation") from its spectrum (see [9], [10]).
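For one simple rational-spectral-density case the optimal linear extrapolation can be written down explicitly and checked by simulation. For a zero-mean AR(1) process the best linear one-step predictor from the whole past is $a X(t)$, with mean-square error equal to the innovation variance. The sketch below (an illustration added here, not part of the original article; it assumes numpy) verifies this empirically.

```python
import numpy as np

rng = np.random.default_rng(1)

# Zero-mean AR(1): X(t) = a*X(t-1) + eps(t); its spectral density is a
# rational function of exp(i*lam), and the optimal one-step extrapolation
# from the past is  Xhat(t+1) = a*X(t)  with mean-square error  E eps^2 = s2.
a, s2, N = 0.8, 1.0, 400_000

eps = rng.normal(0.0, np.sqrt(s2), N)
x = np.empty(N)
x[0] = rng.normal(0.0, np.sqrt(s2 / (1 - a**2)))  # stationary start
for t in range(1, N):
    x[t] = a * x[t - 1] + eps[t]

pred_err = x[1:] - a * x[:-1]       # one-step prediction errors
mse_optimal = np.mean(pred_err**2)  # close to s2
var_x = np.var(x)                   # close to s2/(1-a^2): the error of the
                                    # trivial predictor Xhat = 0
```

The gap between `mse_optimal` and `var_x` quantifies how much the optimal predictor gains over ignoring the past entirely.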

The above approaches to the solution of problems of optimal linear extrapolation, interpolation and filtering provide sufficiently simple explicit formulas for the required best approximation $\hat X(t+\tau)$ or $\hat Y$, usable in practice, only in certain exceptional cases. One important case in which such explicit formulas do exist is the case of a stationary stochastic process $X(t)$ whose spectral density $f(\lambda)$ is a rational function of $e^{i\lambda}$ (if the time is discrete) or of $\lambda$ (if the time is continuous); this case was studied in detail by Wiener [8] (as applied to problems of extrapolation and filtering by means of the values $X(s)$, $s\le t$). It was subsequently demonstrated that for such stationary stochastic processes with rational spectral density there is also an explicit solution of the problems of linear interpolation, extrapolation and filtering by means of data on a finite interval $t-T\le s\le t$ (see, for example, [2], [11]). The simplicity of processes with rational spectral density can be explained by the fact that such stationary stochastic processes (and practically only they) are a one-dimensional component of a multi-dimensional stationary Markov process (see [12]).

The concept of a stationary stochastic process permits a whole series of generalizations. One of these is the concept of a generalized stationary stochastic process. This is a generalized stochastic process $\Phi(\phi)$ (cf. Stochastic process, generalized), i.e. a random linear functional defined on the space $D$ of infinitely-differentiable functions with compact support, such that either the distribution function of the random vector $\{\Phi(\phi_1^{(a)}),\dots,\Phi(\phi_n^{(a)})\}$, where $\phi_k^{(a)}(t)=\phi_k(t+a)$, coincides, for any positive integer $n$, real number $a$ and $\phi_1,\dots,\phi_n\in D$, with the probability distribution of the vector $\{\Phi(\phi_1),\dots,\Phi(\phi_n)\}$ (a generalized stationary stochastic process in the strict sense), or else

$$\mathsf{E}\Phi(\phi^{(a)})=\mathsf{E}\Phi(\phi),\qquad \mathsf{E}\Phi(\phi_1^{(a)})\overline{\Phi(\phi_2^{(a)})}=\mathsf{E}\Phi(\phi_1)\overline{\Phi(\phi_2)}$$

for all real $a$ (a generalized stationary stochastic process in the wide sense). A generalized stationary stochastic process in the wide sense and its correlation functional $\mathsf{E}\Phi(\phi_1)\overline{\Phi(\phi_2)}$ (or covariance functional) permit spectral decompositions related to (3) and (5) (see Spectral decomposition of a random function). Other frequently-used generalizations of the concept of a stationary stochastic process are the concepts of a stochastic process with stationary increments of a certain order and of a homogeneous random field (cf. Random field, homogeneous).

References

[1] E.E. Slutskii, Selected works , Moscow (1980) pp. 252–268 (In Russian)
[2] Yu.A. Rozanov, "Stationary random processes" , Holden-Day (1967) (Translated from Russian) MR0214134 Zbl 0152.16302
[3] H. Cramér, M.R. Leadbetter, "Stationary and related stochastic processes" , Wiley (1967) MR0217860 Zbl 0162.21102
[4] A.Ya. Khinchin, "Theory of correlation of stationary stochastic processes" Uspekhi Mat. Nauk : 5 (1938) pp. 42–51 (In Russian)
[5] A.N. Kolmogorov, "Interpolation and extrapolation of stationary stochastic series" Izv. Akad. Nauk SSSR Ser. Mat. , 5 : 1 (1941) pp. 3–14 (In Russian) (German abstract)
[6] J.L. Doob, "Stochastic processes" , Wiley (1953) MR1570654 MR0058896 Zbl 0053.26802
[7] I.I. [I.I. Gikhman] Gihman, A.V. [A.V. Skorokhod] Skorohod, "The theory of stochastic processes" , 1 , Springer (1971) (Translated from Russian) MR0636254 MR0651015 MR0375463 MR0346882 Zbl 0531.60002 Zbl 0531.60001 Zbl 0404.60061 Zbl 0305.60027 Zbl 0291.60019
[8] N. Wiener, "Extrapolation, interpolation and smoothing of stationary time series" , M.I.T. (1949) MR0031213 Zbl 0036.09705
[9] M.G. Krein, "On a basic approximation problem of the theory of extrapolation and filtering of stationary stochastic processes" Dokl. Akad. Nauk SSSR , 94 : 1 (1954) pp. 13–16 (In Russian)
[10] H. Dym, H.P. McKean, "Gaussian processes, function theory, and the inverse spectral problem" , Acad. Press (1976) MR0448523 Zbl 0327.60029
[11] A.M. Yaglom, "Extrapolation, interpolation and filtering of stationary stochastic processes with rational spectral density" Trudy Moskov. Mat. Obshch. , 4 (1955) pp. 333–374 (In Russian)
[12] J.L. Doob, "The elementary Gaussian processes" Ann. Math. Stat. , 15 (1944) pp. 229–282 MR0010931 Zbl 0060.28907


Comments

In the English-language literature one says "ergodic stationary process" rather than "metrically-transitive stationary process". See also Gaussian process.

The terminology concerning the phrases "correlation function" and "covariance function" is not yet completely standardized. In probability theory the following terminology seems to be just about universally adopted. Given two random variables $X$ and $Y$, their covariance is $\operatorname{cov}(X,Y)=\mathsf{E}[(X-\mathsf{E}X)(Y-\mathsf{E}Y)]$ and their correlation coefficient is $\rho(X,Y)=\operatorname{cov}(X,Y)/\sigma_X\sigma_Y$, where $\sigma_X^2$ is the variance of $X$ and $\sigma_Y^2$ that of $Y$ (cf. Dispersion); there is no special name for the mixed second-order moment $\mathsf{E}XY$. Correspondingly one has the terms covariance function and correlation function (also termed auto-covariance function and auto-correlation function) for the following quantities associated to a stochastic process $X(t)$:

$$B(t,s)=\operatorname{cov}(X(t),X(s)),\qquad R(t,s)=\rho(X(t),X(s)),$$

and for two stochastic processes $X(t)$ and $Y(t)$ there are correspondingly the cross covariance function and cross correlation function

$$B_{XY}(t,s)=\operatorname{cov}(X(t),Y(s)),\qquad R_{XY}(t,s)=\rho(X(t),Y(s)).$$

However, in application areas of probability theory a somewhat different terminology is also employed, for instance in oceanology, hydrology and electrical engineering (cf. [a1]–[a4]). E.g., one also finds the phrase "correlation function" for the quantity $\mathsf{E}[(X(t)-\mathsf{E}X(t))(X(s)-\mathsf{E}X(s))]$ (instead of covariance function). Also, one regularly finds the terminology "correlation function" for the quantity

$$\mathsf{E}X(t)X(s),$$

which is also in agreement with the phrase "correlation function" (pair correlation function) as it is used in statistical mechanics, cf. e.g. [a5].

If the process is stationary, the distinctions are minor. Thus, if the mean value $m=\mathsf{E}X(t)$ is zero, the covariance function and the moment $\mathsf{E}X(t+\tau)X(t)$ coincide, and the normalized correlation function differs from them only by the constant factor $1/B(0)$, where $B(0)$ is the variance of the process.
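The relations between these competing "correlation function" usages can be made concrete numerically. The sketch below (an illustration added here, not part of the original article; it assumes numpy, and uses an i.i.d. sequence purely for simplicity) estimates the covariance function, the non-centered moment $\mathsf{E}X(t+\tau)X(t)$, and the normalized correlation for a stationary sequence with mean $m$, and shows they differ only by $m^2$ and by the factor $1/B(0)$.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stationary sequence with mean m; here i.i.d. N(m, 1) for simplicity,
# so the true covariance B(tau) is 0 for tau != 0 and 1 for tau = 0.
N, tau, m = 500_000, 3, 5.0
x = m + rng.normal(0.0, 1.0, N)

def cov_fn(x, tau):
    # empirical covariance function E[(X(t+tau)-m)(X(t)-m)]
    xc = x - x.mean()
    return np.mean(xc[tau:] * xc[:-tau]) if tau else np.mean(xc * xc)

def moment_fn(x, tau):
    # empirical non-centered moment E[X(t+tau) X(t)] = B(tau) + m^2
    return np.mean(x[tau:] * x[:-tau]) if tau else np.mean(x * x)

B = cov_fn(x, tau)      # close to 0
M = moment_fn(x, tau)   # close to m**2, i.e. B + m**2
R = B / cov_fn(x, 0)    # normalized correlation, close to 0
```

Replacing the i.i.d. sequence by any stationary model (e.g. an AR(1)) changes the values of `B`, `M`, `R` but not the relations `M = B + m**2` and `R = B / B(0)`.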

References

[a1] A.H. Jazwinski, "Stochastic processes and filtering theory" , Acad. Press (1970) pp. 53–54 Zbl 0203.50101
[a2] Y. Sawaragi, Y. Sunahara, T. Nakamizo, "Statistical decision theory in adaptive control systems" , Acad. Press (1967) Zbl 0189.47003
[a3] A.S. Monin, R.V. Ozmidov, "Turbulence in the ocean" , Reidel (1985) (Translated from Russian)
[a4] K. Sobczyk, "Stochastic differential equations. With applications to physics and engineering" , Kluwer (1991) pp. 23 MR1135326 Zbl 0762.60050
[a5] C.J. Thompson, "Mathematical statistical mechanics" , Princeton Univ. Press (1972) MR0469020 Zbl 0244.60082
How to Cite This Entry:
Stationary stochastic process. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Stationary_stochastic_process&oldid=12216
This article was adapted from an original article by A.M. Yaglom (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article