
Revision as of 11:34, 26 July 2014

A term in discriminant analysis denoting a variable used to establish a rule for assigning an object with measurements $x=(x_1,\ldots,x_p)$, drawn from a mixture of $k$ sets with distribution densities $p_1(x),\ldots,p_k(x)$ and a priori probabilities $q_1,\ldots,q_k$, to one of these sets. The $i$-th discriminant informant of the object with measurement $x$ is defined as

$$S_i=-[q_1p_1(x)r_{1i}+\ldots+q_kp_k(x)r_{ki}],\quad i=1,\ldots,k,$$

where $r_{ij}$ is the loss incurred by assigning an element from the $i$-th distribution to the $j$-th. The rule that assigns an object to the distribution with the largest discriminant informant has the minimum expected loss. In particular, if all $k$ distributions are normal with identical covariance matrices, all discriminant informants are linear. Then, for $k=2$, the difference $S_1-S_2$ is Fisher's linear discriminant function.
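The rule above can be illustrated with a small numerical sketch. The following Python snippet (a hypothetical two-class example, not part of the original article) computes $S_i=-\sum_j q_j\,p_j(x)\,r_{ji}$ for two bivariate normal densities with identical covariance matrices and 0–1 losses, and assigns the object to the set with the largest informant; the names `normal_pdf` and `discriminant_informants` are illustrative choices.

```python
import numpy as np

# Hypothetical example with k = 2 sets, equal covariance matrices,
# and 0-1 losses r_{ij} (no loss for a correct assignment).
q = np.array([0.6, 0.4])                       # a priori probabilities q_1, q_2
means = [np.array([0.0, 0.0]), np.array([2.0, 2.0])]
cov = np.eye(2)                                # identical covariance matrices
r = np.array([[0.0, 1.0],                      # r[i, j]: loss for assigning an
              [1.0, 0.0]])                     # element of set i to set j

def normal_pdf(x, m, cov):
    """Density of a multivariate normal N(m, cov) at x."""
    d = x - m
    k = len(x)
    norm = np.sqrt((2 * np.pi) ** k * np.linalg.det(cov))
    return np.exp(-0.5 * d @ np.linalg.inv(cov) @ d) / norm

def discriminant_informants(x):
    """Return (S_1, ..., S_k): S_i = -sum_j q_j p_j(x) r_{ji}."""
    p = np.array([normal_pdf(x, m, cov) for m in means])
    return -(q * p) @ r

x = np.array([0.5, 0.3])                       # measurement near the first mean
S = discriminant_informants(x)
assigned = int(np.argmax(S)) + 1               # assign to the largest informant
```

For this $x$, which lies near the first mean, $S_1 > S_2$ and the object is assigned to the first set; with 0–1 losses the rule reduces to comparing $q_1 p_1(x)$ with $q_2 p_2(x)$, i.e. the usual Bayes classifier.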

References

[1] C.R. Rao, "Linear statistical inference and its applications" , Wiley (1965)
How to Cite This Entry:
Discriminant informant. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Discriminant_informant&oldid=13021
This article was adapted from an original article by N.M. Mitrofanova, A.P. Khusu (originators), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article