Moment matrix

From Encyclopedia of Mathematics

Latest revision as of 18:32, 21 December 2020

A matrix containing the moments of a probability distribution (cf. also Moment; Moments, method of (in probability theory)). For example, if $\psi$ is a probability distribution on a set $I \subset \mathbf{C}$, then $m _ { k } = \int _ { I } x ^ { k } d \psi ( x )$ is its $k$th order moment. If $\psi$ and thus the moments are given, then a linear functional $L$ is defined on the set of polynomials by $L ( x ^ { k } ) = m _ { k }$, $k = 0,1 , \ldots$. The inverse problem is called a moment problem (cf. also Moment problem): Given the sequence of moments $m_k$, $k = 0,1 , \ldots$, find necessary and sufficient conditions for the existence of, and an expression for, a positive distribution (a non-decreasing function with possibly infinitely many points of increase) that gives an integral representation of that linear functional. A positive distribution can only exist if $L ( p ) > 0$ for every polynomial $p$ that is positive on $I$.
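The correspondence between a distribution, its moments, and the linear functional $L$ can be sketched numerically. The following is an illustrative example only (the discrete distribution and its support points are chosen arbitrarily, not taken from the article):

```python
import numpy as np

# A hypothetical discrete probability distribution psi: mass w[i] at point x[i].
x = np.array([-1.0, 0.0, 2.0])
w = np.array([0.25, 0.5, 0.25])          # weights sum to 1

def moment(k):
    """k-th order moment m_k = integral of x^k d psi(x)."""
    return float(np.sum(w * x ** k))

def L(coeffs):
    """Linear functional on polynomials: L(sum_k c_k x^k) = sum_k c_k m_k,
    where coeffs[k] is the coefficient of x^k."""
    return sum(c * moment(k) for k, c in enumerate(coeffs))

# L agrees with integrating the polynomial directly against psi:
p = [1.0, 2.0, 3.0]                      # p(x) = 1 + 2x + 3x^2
direct = float(np.sum(w * (1 + 2 * x + 3 * x ** 2)))
assert abs(L(p) - direct) < 1e-12
```

The moment problem runs this construction in reverse: only the values $m_k$ are given, and $\psi$ is sought.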

For the Hamburger moment problem (cf. also Complex moment problem, truncated), $I$ is the real axis and the polynomials are real, so the functional $L$ is positive if $L ( p ^ { 2 } ( x ) ) > 0$ for every non-zero polynomial $p$; this implies that the moment matrices, i.e., the Hankel matrices of the moment sequence, $M _ { n } = [ m _ { i+j }] _ { i , j = 0 } ^ { n }$, are positive definite for all $n = 0,1 , \dots$ (cf. also Hankel matrix). This is a necessary and sufficient condition for the existence of a solution.
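The positive-definiteness of the Hankel moment matrices can be checked numerically. A minimal sketch (not part of the original article), using the first moments of the standard normal distribution, whose moments are $m_{2j} = (2j-1)!!$ with all odd moments zero:

```python
import numpy as np

# Moments m_0 .. m_6 of the standard normal distribution.
m = [1, 0, 1, 0, 3, 0, 15]

# Hankel moment matrices M_n = [m_{i+j}]_{i,j=0}^n must be positive definite.
for n in range(4):                        # M_0, ..., M_3
    M = np.array([[m[i + j] for j in range(n + 1)] for i in range(n + 1)],
                 dtype=float)
    eigs = np.linalg.eigvalsh(M)
    assert np.all(eigs > 0)               # positive definite, as the text states
```

Since the normal distribution is a genuine positive distribution on the real axis, its Hankel matrices pass the test for every $n$.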

For the trigonometric moment problem, $I$ is the unit circle in the complex plane and the polynomials are complex, so that "positive definite" here means that $L ( | p ( z ) | ^ { 2 } ) > 0$ for all non-zero polynomials $p$. The linear functional is automatically defined on the space of Laurent polynomials (cf. also Laurent series) since $m _ { - k } = L ( z ^ { - k } ) = \overline { L ( z ^ { k } ) } = \overline { m } _ { k }$. Positive definite now corresponds to the Toeplitz moment matrices $M _ { n } = [ m _ { i - j} ] _ { i ,\, j = 0 } ^ { n }$ being positive definite for all $n = 0,1,2 , \dots$ (cf. also Toeplitz matrix). Again this is the necessary and sufficient condition for the existence of a (unique) solution to the moment problem.
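Again this can be illustrated numerically. A standard example (chosen here for illustration, not taken from the article) is the Poisson kernel measure on the unit circle with parameter $0 < r < 1$, whose trigonometric moments are $m_k = r^{|k|}$; the resulting Toeplitz matrices $[m_{i-j}]$ are positive definite:

```python
import numpy as np

r = 0.5

def m(k):
    """Trigonometric moments of the Poisson kernel measure: m_k = r^|k|.
    Here m_{-k} = conj(m_k) holds trivially since the moments are real."""
    return r ** abs(k)

# Toeplitz moment matrices M_n = [m_{i-j}]_{i,j=0}^n are positive definite.
for n in range(5):
    M = np.array([[m(i - j) for j in range(n + 1)] for i in range(n + 1)])
    assert np.all(np.linalg.eigvalsh(M) > 0)
```

The Hermitian symmetry $m_{-k} = \overline{m}_k$ is exactly what lets the functional extend to Laurent polynomials, as noted above.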

Once the positive-definite linear functional is given, one can define an inner product on the space of polynomials as $\langle f , g \rangle = L ( f ( x ) g ( x ) )$ in the real case or as $\langle f , g \rangle = L ( f ( z ) \overline { g ( z ) } )$ in the complex case. The moment matrix is then the Gram matrix for the standard basis $m _ { i + j} = \langle x ^ { i } , x ^ { j } \rangle$ or $m _ { i -j } = \langle x ^ { i } , x ^ { j } \rangle$.

Generalized moments correspond to the use of non-standard basis functions for the polynomials, or possibly for other spaces. Consider a set of basis functions $f _ { 0 } , f _ { 1 } , \dots$ that span the space $\mathcal{L}$. The modified or generalized moments are then given by $m _ { k } = L ( f _ { k } )$. The moment problem is to find a positive distribution function $\psi$ that gives an integral representation of the linear functional on $\mathcal{L}$. However, to define an inner product, one needs the functional to be defined on $\mathcal{R} = \mathcal{L} \cdot \mathcal{L}$ (in the real case) or on $\mathcal{R} = \mathcal{L} \cdot \overline { \mathcal{L} }$ (in the complex case). This requires a doubly indexed sequence of "moments" $m _ { i j } = \langle f _ { i } , f _ { j } \rangle$. Finding a distribution for an integral representation of $L$ on $\mathcal{R}$ is called a strong moment problem.

The solution of moment problems is often obtained using an orthogonal basis. If the $f _ { k }$ are orthonormalized to give the functions $\phi _ { 0 } , \phi _ { 1 } , \ldots$, then the moment matrix $M _ { n } = [ m _ { i j } ] _ { i , j = 0 } ^ { n }$ can be used to give explicit expressions; namely, $\phi _ { n } ( z ) = M _ { n } ( z ) / \sqrt { \mathcal{M} _ { n - 1} \mathcal{M} _ { n }} $, where $\mathcal{M} _ { - 1 } = 1$, $M _ { 0 } ( z ) = f _ { 0 } ( z )$ and, for $n \geq 1$, $\mathcal{M} _ { n } = \operatorname { det } M _ { n }$ with

\begin{equation*} M _ { n } ( z ) = \left( \begin{array} { c c c } { \langle f _ { 0 } , f _ { 0 } \rangle } & { \dots } & { \langle f _ { 0 } , f _ { n } \rangle } \\ { \vdots } & { \ddots } & { \vdots } \\ { \langle f _ { n - 1 } , f _ { 0 } \rangle } & { \dots } & { \langle f _ { n - 1 } , f _ { n } \rangle } \\ { f _ { 0 } ( z ) } & { \dots } & { f _ { n } ( z ) } \end{array} \right). \end{equation*}

The leading coefficient in the expansion $\phi _ { n } ( z ) = \kappa _ { n } f _ { n } ( z ) +\dots $ satisfies $| \kappa _ { n } | ^ { 2 } = \mathcal{M} _ { n - 1 } / \mathcal{M} _ { n }$.
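The determinant formula can be verified numerically. A minimal sketch (assuming the convention $\mathcal{M}_{-1} = 1$, which makes the formula valid for $n = 0$), for the functional $L(x^k) = \int_0^1 x^k \, dx$, i.e. $m_{ij} = 1/(i+j+1)$, the Hilbert matrix; the orthonormal polynomials are then the normalized shifted Legendre polynomials:

```python
import numpy as np

def ip(i, j):
    """Inner product <x^i, x^j> = m_{i+j} = integral of x^{i+j} over [0,1]."""
    return 1.0 / (i + j + 1)

def phi(n, z):
    """phi_n(z) = M_n(z) / sqrt(Mdet_{n-1} Mdet_n), with Mdet_{-1} = 1."""
    G = lambda k: np.array([[ip(i, j) for j in range(k + 1)]
                            for i in range(k + 1)])
    d = lambda k: 1.0 if k < 0 else np.linalg.det(G(k))
    # M_n(z): first n rows of the Gram matrix, last row holds f_j(z) = z^j
    Mz = np.vstack([G(n)[:n], [z ** j for j in range(n + 1)]])
    return np.linalg.det(Mz) / np.sqrt(d(n - 1) * d(n))

# phi_1 should equal sqrt(3)*(2z - 1), the normalized shifted Legendre P_1.
z = 0.7
assert abs(phi(1, z) - np.sqrt(3) * (2 * z - 1)) < 1e-10
assert abs(phi(0, z) - 1.0) < 1e-10       # phi_0 = 1 for this weight
```

Here $\mathcal{M}_0 = 1$ and $\mathcal{M}_1 = 1/12$, so $\kappa_1^2 = \mathcal{M}_0/\mathcal{M}_1 = 12$, matching the leading coefficient $2\sqrt{3}$ of $\sqrt{3}(2z-1)$.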

References

[a1] N.I. Akhiezer, "The classical moment problem" , Oliver & Boyd (1969) (Translated from Russian)
[a2] J.A. Shohat, J.D. Tamarkin, "The problem of moments" , Math. Surveys , 1 , Amer. Math. Soc. (1943)
How to Cite This Entry:
Moment matrix. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Moment_matrix&oldid=15678
This article was adapted from an original article by A. Bultheel (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article