Mahalanobis distance


The quantity

$$ \rho ( X, Y \mid A ) = \{ ( X - Y )^{T} A ( X - Y ) \}^{1/2}, $$

where $X, Y$ are vectors, $A$ is a matrix, and ${}^{T}$ denotes transposition. The Mahalanobis distance is used in multi-dimensional statistical analysis, in particular for testing hypotheses and for the classification of observations.
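For illustration, here is a minimal sketch in Python with NumPy; the function name quad_form_distance and the numerical values of sigma, x and y are assumptions made for the example, not data from the article:

    import numpy as np

    def quad_form_distance(x, y, a):
        """Compute rho(x, y | a) = sqrt((x - y)^T a (x - y))."""
        # a should be positive semi-definite so the square root is real.
        d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
        return float(np.sqrt(d @ a @ d))

    # Taking a to be the inverse of a covariance matrix sigma gives
    # the Mahalanobis distance discussed below.
    sigma = np.array([[2.0, 0.5],
                      [0.5, 1.0]])
    x = np.array([1.0, 2.0])
    y = np.array([3.0, 1.0])
    print(quad_form_distance(x, y, np.linalg.inv(sigma)))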

It was introduced by P. Mahalanobis [1], who used the quantity

$$ \rho ( \mu_{1} , \mu_{2} \mid \Sigma^{-1} ) $$

as a distance between two normal distributions with expectations $\mu_{1}$ and $\mu_{2}$ and common covariance matrix $\Sigma$.

The Mahalanobis distance between two samples (from distributions with identical covariance matrices), or between a sample and a distribution, is defined by replacing the corresponding theoretical moments by sample moments. As an estimate of the Mahalanobis distance between two distributions one uses the Mahalanobis distance between the samples extracted from these distributions or, in the case where a linear discriminant function is utilized [5], the statistic $\Phi^{-1}(\alpha) + \Phi^{-1}(\beta)$, where $\alpha$ and $\beta$ are the frequencies of correct classification in the first and the second collection, respectively, and $\Phi$ is the normal distribution function with expectation 0 and variance 1.
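A sketch of both sample-based estimates, assuming NumPy and SciPy; the helper sample_mahalanobis, the pooled-covariance convention with n1 + n2 - 2 degrees of freedom, and the values of alpha and beta are illustrative assumptions, not prescribed by the article:

    import numpy as np
    from scipy.stats import norm

    def sample_mahalanobis(s1, s2):
        # Replace the theoretical moments by sample moments: sample
        # means for mu_1, mu_2 and a pooled covariance for Sigma.
        s1, s2 = np.asarray(s1, float), np.asarray(s2, float)
        n1, n2 = len(s1), len(s2)
        d = s1.mean(axis=0) - s2.mean(axis=0)
        pooled = ((n1 - 1) * np.cov(s1, rowvar=False)
                  + (n2 - 1) * np.cov(s2, rowvar=False)) / (n1 + n2 - 2)
        return float(np.sqrt(d @ np.linalg.inv(pooled) @ d))

    rng = np.random.default_rng(0)
    a = rng.normal(size=(200, 2))                         # sample 1
    b = rng.normal(size=(200, 2)) + np.array([1.0, 0.0])  # sample 2, shifted mean
    print(sample_mahalanobis(a, b))

    # Discriminant-based estimate Phi^{-1}(alpha) + Phi^{-1}(beta);
    # norm.ppf is the inverse of the standard normal distribution function.
    alpha, beta = 0.9, 0.85
    print(norm.ppf(alpha) + norm.ppf(beta))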

References

[1] P. Mahalanobis, "On tests and measures of group divergence I. Theoretical formulae", J. and Proc. Asiat. Soc. of Bengal, 26 (1930) pp. 541–588
[2] P. Mahalanobis, "On the generalized distance in statistics", Proc. Nat. Inst. Sci. India (Calcutta), 2 (1936) pp. 49–55
[3] T.W. Anderson, "An introduction to multivariate statistical analysis", Wiley (1958)
[4] S.A. Aivazyan, Z.I. Bezhaeva, O.V. Staroverov, "Classifying multivariate observations", Moscow (1974) (In Russian)
[5] A.I. Orlov, "On the comparison of algorithms for classification by results of observations of actual data", Dokl. Moskov. Obshch. Isp. Prirod. 1985, Otdel. Biol. (1987) pp. 79–82 (In Russian)