Partial correlation coefficient


2020 Mathematics Subject Classification: Primary: 62-XX

A partial correlation coefficient is a measure of the linear dependence of a pair of random variables from a collection of random variables in the case where the influence of the remaining variables is eliminated. More precisely, suppose that the random variables $X_1,\dots,X_n$ have a joint distribution in $\R^n$, and let $X^*_{1;3\dots n}$, $X^*_{2;3\dots n}$ be the best linear approximations to the variables $X_1$ and $X_2$ based on the variables $X_3,\dots,X_n$. Then the partial correlation coefficient between $X_1$ and $X_2$, denoted by $\rho_{12;3\dots n}$, is defined as the ordinary correlation coefficient between the random variables $Y_1 = X_1 - X^*_{1;3\dots n}$ and $Y_2 = X_2 - X^*_{2;3\dots n}$:

$$\rho_{12;3\dots n} = \frac{\mathrm{E}\{(Y_1- \mathrm{E}Y_1)(Y_2- \mathrm{E}Y_2)\}}{\sqrt{\mathrm{D}Y_1\,\mathrm{D}Y_2}}.$$

It follows from the definition that $-1 \le \rho_{12;3\dots n}\le 1$. The partial correlation coefficient can be expressed in terms of the entries of the correlation matrix. Let $P=\|\rho_{ij}\|$, where $\rho_{ij}$ is the correlation coefficient between $X_i$ and $X_j$, and let $P_{ij}$ be the cofactor of the element $\rho_{ij}$ in the determinant $|P|$; then

$$\rho_{12;3\dots n} = - \frac{P_{12}}{\sqrt{P_{11} P_{22}}}.$$

For example, for $n=3$,

$$\rho_{12;3} = \frac{\rho_{12} - \rho_{13}\rho_{23}}{\sqrt{(1-\rho_{13}^2)(1-\rho_{23}^2)}}.$$

The partial correlation coefficient of any two variables $X_i,\; X_j$ from $X_1,\dots,X_n$ is defined analogously. In general, the partial correlation coefficient $\rho_{12;3\dots n}$ is different from the (ordinary) correlation coefficient $\rho_{12}$ of $X_1$ and $X_2$. The difference between $\rho_{12;3\dots n}$ and $\rho_{12}$ indicates whether $X_1$ and $X_2$ are dependent, or whether the dependence between them is a consequence of the dependence of each of them on $X_3,\dots,X_n$. If the variables $X_1,\dots,X_n$ are pairwise uncorrelated, then all partial correlation coefficients are zero.
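A minimal numerical sketch (hypothetical, not part of the article; the mixing matrix and sample size are arbitrary) of the two equivalent computations for $n=3$: correlating the residuals of the best linear approximations, and the closed form in the pairwise correlation coefficients. The two agree exactly when the residual regressions include an intercept:

```python
import numpy as np

rng = np.random.default_rng(0)
# Three correlated variables X1, X2, X3 (arbitrary mixing of iid normals).
X = rng.standard_normal((10_000, 3)) @ np.array([[1.0, 0.5, 0.3],
                                                 [0.0, 1.0, 0.4],
                                                 [0.0, 0.0, 1.0]])
x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]

def residual(y, z):
    """Residual of the best linear (least-squares, with intercept) fit of y on z."""
    A = np.column_stack([np.ones_like(z), z])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return y - A @ beta

# Definition: ordinary correlation coefficient of the residuals Y1, Y2.
r_resid = np.corrcoef(residual(x1, x3), residual(x2, x3))[0, 1]

# Closed form for n = 3 in terms of the pairwise correlations.
P = np.corrcoef(X, rowvar=False)
r12, r13, r23 = P[0, 1], P[0, 2], P[1, 2]
r_closed = (r12 - r13 * r23) / np.sqrt((1 - r13**2) * (1 - r23**2))

assert abs(r_resid - r_closed) < 1e-8
```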

The empirical analogue of the partial correlation coefficient $\rho_{12;3\dots n}$, the empirical (or sample) partial correlation coefficient, is the statistic

$$r_{12;3\dots n} = - \frac{R_{12}}{\sqrt{R_{11}R_{22}}},$$

where $R_{ij}$ is the cofactor of the element $r_{ij}$ in the determinant of the matrix $R=\|r_{ij}\|$ of the empirical correlation coefficients $r_{ij}$. If the results of the observations are independent and multivariate normally distributed, and $\rho_{12;3\dots n} = 0$, then $r_{12;3\dots n}$ is distributed according to the probability density

$$\frac{1}{\sqrt{\pi}} \frac{\Gamma((N-n+1)/2)}{\Gamma((N-n)/2)}(1-x^2)^{(N-n-2)/2}, \quad -1<x<1$$

($N$ is the sample size). To test hypotheses about partial correlation coefficients, one uses the fact that the statistic

$$t=\sqrt{N-n}\,\frac{r}{\sqrt{1-r^2}},\quad \textrm{where}\ r = r_{12;3\dots n},$$

has, under the stated conditions, a Student distribution with $N-n$ degrees of freedom.
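The cofactor formula and the $t$ statistic can be sketched numerically (a hypothetical example; the data and dimensions are arbitrary). For an invertible correlation matrix $R$ with $\det R > 0$, the cofactors satisfy $R_{ij} = \det(R)\,(R^{-1})_{ji}$, so the determinant cancels in the ratio and the inverse matrix suffices:

```python
import numpy as np

rng = np.random.default_rng(1)
N, n = 200, 4                      # sample size and number of variables
X = rng.standard_normal((N, n))    # iid observations, so rho_{12;3...n} = 0

R = np.corrcoef(X, rowvar=False)   # empirical correlation matrix
C = np.linalg.inv(R)               # cofactor R_ij = det(R) * C[j, i]; det cancels below

# Sample partial correlation r_{12;3...n} = -R_12 / sqrt(R_11 R_22).
r = -C[0, 1] / np.sqrt(C[0, 0] * C[1, 1])

# Cross-check against the residual definition: regress X1 and X2 on X3..Xn.
Z = np.column_stack([np.ones(N), X[:, 2:]])
res = lambda y: y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
r_resid = np.corrcoef(res(X[:, 0]), res(X[:, 1]))[0, 1]
assert abs(r - r_resid) < 1e-8

# Test statistic: Student distribution with N - n degrees of freedom
# under the normality assumption and rho_{12;3...n} = 0.
t = np.sqrt(N - n) * r / np.sqrt(1 - r**2)
```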

References

[Cr] H. Cramér, "Mathematical methods of statistics", Princeton Univ. Press (1946) MR0016588 Zbl 0063.01014
[KeSt] M.G. Kendall, A. Stuart, "The advanced theory of statistics", 2. Inference and relationship, Griffin (1979) MR0474561 MR0243648 Zbl 0416.62001
[Mu] R.J. Muirhead, "Aspects of multivariate statistical theory", Wiley (1982) MR0652932 Zbl 0556.62028
How to Cite This Entry:
Partial correlation coefficient. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Partial_correlation_coefficient&oldid=14288
This article was adapted from an original article by A.V. Prokhorov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.