Average sample number

ASN

A term occurring in the area of statistics called sequential analysis. In sequential procedures for statistical estimation, hypothesis testing, or decision theory, the number of observations taken (the sample size) is not predetermined, but depends on the observations themselves. The expected value of the sample size of such a procedure is called the average sample number. This expected value depends on the underlying distribution of the observations, which is often unknown. If the distribution is determined by the value of some parameter, then the average sample number becomes a function of that parameter.

If $N$ denotes the sample size and $\theta$ is the unknown parameter, then the average sample number is given by

\begin{equation} \tag{a1} \mathsf{E} _ { \theta } ( N ) = \sum _ { n = 1 } ^ { \infty } n \mathsf{P} _ { \theta } ( N = n ) = \sum _ { n = 0 } ^ { \infty } \mathsf{P} _ { \theta } ( N > n ). \end{equation}
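
The second expression in (a1) is the familiar tail-sum formula; it follows by interchanging the order of summation:

\begin{equation*} \sum _ { n = 1 } ^ { \infty } n \mathsf{P} _ { \theta } ( N = n ) = \sum _ { n = 1 } ^ { \infty } \sum _ { k = 0 } ^ { n - 1 } \mathsf{P} _ { \theta } ( N = n ) = \sum _ { k = 0 } ^ { \infty } \sum _ { n = k + 1 } ^ { \infty } \mathsf{P} _ { \theta } ( N = n ) = \sum _ { k = 0 } ^ { \infty } \mathsf{P} _ { \theta } ( N > k ). \end{equation*}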

Some aspects of the average sample number in the case where the observations are independent Bernoulli random variables $X_i$ with $\mathsf{E} _ { \theta } ( X _ { i } ) = \mathsf{P} _ { \theta } ( X _ { i } = 1 ) = \theta = 1 - \mathsf{P} _ { \theta } ( X _ { i } = 0 )$ are given below.

Consider the case of the curtailed version of a test of the hypotheses $H _ { 0 } : \theta = 0$ versus $H _ { 1 } : \theta > 0$ that takes $n$ observations and decides $H _ { 1 }$ if $X _ { 1 } + \ldots + X _ { n } > 0$ (cf. also Statistical hypotheses, verification of). The curtailed version of this test is sequential and stops the first time that $X _ { k } = 1$ or at time $n$, whichever comes first. Then the average sample number is given by

\begin{equation} \tag{a2} \mathsf{E} _ { \theta } ( N ) = \sum _ { k = 0 } ^ { n - 1 } \mathsf{P} _ { \theta } ( N > k ) = \sum _ { k = 0 } ^ { n - 1 } ( 1 - \theta ) ^ { k } = \frac { 1 - ( 1 - \theta ) ^ { n } } { \theta } \quad \text { for } \theta > 0. \end{equation}
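
The middle equality holds because, for $k < n$, the event $N > k$ occurs exactly when $X _ { 1 } = \ldots = X _ { k } = 0$, which has probability $( 1 - \theta ) ^ { k }$. For a rough numerical illustration, if $n = 10$ and $\theta = 0.1$, then

\begin{equation*} \mathsf{E} _ { 0.1 } ( N ) = \frac { 1 - ( 0.9 ) ^ { 10 } } { 0.1 } \approx 6.5, \end{equation*}

so the curtailed test uses on average about $6.5$ observations instead of the $10$ required by the fixed-sample-size test.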

This formula is a special case of Wald's lemma (see [a3] and Wald identity), which is very useful in finding or approximating average sample numbers. Wald's lemma states that if $Y , Y _ { 1 } , Y _ { 2 } , \dots$ are independent random variables with common expected value (cf. also Random variable) and $N$ is a stopping time (i.e., the event $\{ N = k \}$ is determined by the observations $Y _ { 1 } , \dots , Y _ { k }$) with finite expected value, then, letting $S _ { n } = Y _ { 1 } + \ldots + Y _ { n }$,

\begin{equation} \tag{a3} \mathsf{E} ( Y ) \mathsf{E} ( N ) = \mathsf{E} ( S _ { N } ). \end{equation}

Thus, for $\mathsf{E} ( Y ) \neq 0$, one has $\mathsf{E} ( N ) = \mathsf{E} ( S _ { N } ) ( \mathsf{E} ( Y ) ) ^ { - 1 }$.

In this example, $Y _ { i } = X _ { i }$, $\mathsf{E} ( Y ) = \theta$, and $\mathsf E _ { \theta } ( S _ { N } ) = \mathsf P _ { \theta } ( S _ { N } = 1 ) = 1 - \mathsf P _ { \theta } ( S _ { n } = 0 ) = 1 - ( 1 - \theta ) ^ { n }$, since $S _ { N } = 0$ exactly when $S _ { n } = 0$, i.e. when all $n$ observations equal $0$. The average sample number then follows from (a3): $\mathsf{E} _ { \theta } ( N ) = \mathsf{E} _ { \theta } ( S _ { N } ) / \mathsf{E} ( Y ) = ( 1 - ( 1 - \theta ) ^ { n } ) / \theta$, in agreement with (a2). See [a1] for asymptotic properties of the average sample number for curtailed tests in general.

In testing $H _ { 0 } : \theta = p$ versus $H _ { 1 } : \theta = q = 1 - p$, the logarithm of the likelihood ratio (cf. also Likelihood-ratio test) after $n$ observations is easily seen to be of the form $S_n \operatorname { log } ( q / p )$, where $Y _ { i } = 2 X _ { i } - 1$. Thus, if $p < .5$, the sequential probability ratio test stops the first time that $S _ { n } = K$ and decides $H _ { 1 }$ or the first time that $S _ { n } = - J$ and decides $H _ { 0 }$ for positive integers $J$ and $K$. In this case $S _ { 1 } , S _ { 2 } , \ldots$ is a random walk taking steps to the right with probability $\theta$ and $\mathsf{E} ( Y ) = 2 \theta - 1$ in formula (a3). Thus, if $\theta \neq 1 / 2$, the average sample number is
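
Indeed, writing out the log-likelihood ratio and using $q = 1 - p$,

\begin{equation*} \sum _ { i = 1 } ^ { n } \operatorname { log } \frac { q ^ { X _ { i } } ( 1 - q ) ^ { 1 - X _ { i } } } { p ^ { X _ { i } } ( 1 - p ) ^ { 1 - X _ { i } } } = \sum _ { i = 1 } ^ { n } ( 2 X _ { i } - 1 ) \operatorname { log } \frac { q } { p } = S _ { n } \operatorname { log } \frac { q } { p }. \end{equation*}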

\begin{equation} \tag{a4} \mathsf{E} _ { \theta } ( N ) = \frac { \mathsf{P} _ { \theta } ( S _ { N } = K ) K - \mathsf{P} _ { \theta } ( S _ { N } = - J ) J } { 2 \theta - 1 }. \end{equation}

Well-known formulas from the theory of random walks show that $\mathsf{P} _ { \theta } ( S _ { N } = K ) = ( 1 - r ^ { J } ) ( 1 - r ^ { K + J } ) ^ { - 1 }$, where $r = ( 1 - \theta ) / \theta$.
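
For instance, with symmetric boundaries $J = K$ this reduces to $\mathsf{P} _ { \theta } ( S _ { N } = K ) = ( 1 + r ^ { K } ) ^ { - 1 }$, and (a4) becomes

\begin{equation*} \mathsf{E} _ { \theta } ( N ) = \frac { K } { 2 \theta - 1 } \cdot \frac { 1 - r ^ { K } } { 1 + r ^ { K } }. \end{equation*}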

If $\theta = .5$, this method fails. One must then use another result of A. Wald, stating that if $ \mathsf{E} ( Y ) = 0$, but $\mathsf{E} ( Y _ { i } ^ { 2 } ) = \sigma ^ { 2 } < \infty$, then

\begin{equation} \tag{a5} \sigma ^ { 2 } \mathsf{E} ( N ) = \mathsf{E} ( S _ { N } ^ { 2 } ). \end{equation}

In this example, for $\theta = .5$, one has $\sigma ^ { 2 } = \mathsf{E} ( Y _ { i } ^ { 2 } ) = 1$, since the steps $Y _ { i } = 2 X _ { i } - 1$ take only the values $\pm 1$, and $\mathsf{P} ( S _ { N } = K ) = J ( J + K ) ^ { - 1 }$. Then (a5) yields $\mathsf E ( N ) = J K$.
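
Here $\mathsf{E} ( S _ { N } ^ { 2 } )$ can be computed directly from the exit probabilities:

\begin{equation*} \mathsf{E} ( S _ { N } ^ { 2 } ) = K ^ { 2 } \frac { J } { J + K } + J ^ { 2 } \frac { K } { J + K } = J K. \end{equation*}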

In order to decide which of two sequential tests is better, it is important to be able to express the average sample number in terms of the error probabilities of the test. Then the average sample numbers of different tests with the same error probabilities can be compared. In this example, the probability of a type-I error is $\alpha = \mathsf{P} _ { p } ( S _ { N } = K )$ and the probability of a type-II error is $\beta = \mathsf{P} _ { q } ( S _ { N } = - J )$ (cf. also Error). From this one sees that

\begin{equation*} K = \operatorname { log } \left( \frac { 1 - \beta } { \alpha } \right) \left( \operatorname { log } \frac { q } { p } \right) ^ { - 1 } \end{equation*}

and

\begin{equation*} J = \operatorname { log } \left( \frac { 1 - \alpha } { \beta } \right) \left( \operatorname { log } \frac { q } { p } \right) ^ { - 1 }. \end{equation*}
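
These expressions can be obtained from the likelihood-ratio identities: since the random walk hits the boundaries exactly (there is no overshoot), $\mathsf{P} _ { q } ( S _ { N } = K ) = ( q / p ) ^ { K } \mathsf{P} _ { p } ( S _ { N } = K )$ and $\mathsf{P} _ { q } ( S _ { N } = - J ) = ( q / p ) ^ { - J } \mathsf{P} _ { p } ( S _ { N } = - J )$, so that

\begin{equation*} \left( \frac { q } { p } \right) ^ { K } = \frac { 1 - \beta } { \alpha } , \quad \left( \frac { q } { p } \right) ^ { - J } = \frac { \beta } { 1 - \alpha }, \end{equation*}

and solving for $K$ and $J$ gives the two formulas above.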

In particular, it follows that if $\theta = p$, then

\begin{equation} \tag{a6} \mathsf{E} _ { p } ( N ) = \frac { \alpha \operatorname { log } ( \frac { 1 - \beta } { \alpha } ) + ( 1 - \alpha ) \operatorname { log } ( \frac { \beta } { 1 - \alpha } ) } { ( p - q ) \operatorname { log } ( q / p ) }. \end{equation}
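
Formula (a6) follows from (a3): under $\theta = p$ one has $\mathsf{E} ( Y ) = 2 p - 1 = p - q$ and

\begin{equation*} \mathsf{E} _ { p } ( S _ { N } ) = \alpha K - ( 1 - \alpha ) J = \frac { \alpha \operatorname { log } ( \frac { 1 - \beta } { \alpha } ) + ( 1 - \alpha ) \operatorname { log } ( \frac { \beta } { 1 - \alpha } ) } { \operatorname { log } ( q / p ) }. \end{equation*}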

This formula and the analogous one with $\theta = q$ are often used to measure the efficiency of the sequential probability ratio test relative to other sequential tests with the same error probabilities.
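
As a rough numerical illustration (the values are chosen here only for concreteness), take $\alpha = \beta = 0.05$, $p = 0.4$ and $q = 0.6$; then (a6) gives

\begin{equation*} \mathsf{E} _ { p } ( N ) = \frac { 0.05 \operatorname { log } ( 19 ) + 0.95 \operatorname { log } ( 1 / 19 ) } { ( - 0.2 ) \operatorname { log } ( 1.5 ) } \approx 33, \end{equation*}

roughly half the sample size required by a fixed-sample-size test with the same error probabilities, in line with Wald's classical efficiency results for the sequential probability ratio test.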

Similar formulas can be found for sequential probability ratio tests for sequences of independent random variables with other distributions, but in general the formulas are approximations since the tests will stop when the logarithm of the likelihood ratio first crosses the boundary rather than when it first hits the boundary. Many studies (see [a2]) consider the adequacy of approximations for the average sample number of sequential probability ratio tests and some other sequential tests whose stopping times are more complicated. In areas such as signal detection, average sample numbers are also studied for sequential tests where the observations are not independent, identically distributed random variables.
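
For instance, for a sequential probability ratio test that continues sampling as long as the likelihood ratio $\lambda _ { n }$ stays between constants $B < 1 < A$ (notation introduced here only for illustration), neglecting the overshoot over the boundaries and applying (a3) to the increments of $\operatorname { log } \lambda _ { n }$ gives the standard approximation

\begin{equation*} \mathsf{E} _ { \theta } ( N ) \approx \frac { \mathsf{P} _ { \theta } ( \text { decide } H _ { 1 } ) \operatorname { log } A + \mathsf{P} _ { \theta } ( \text { decide } H _ { 0 } ) \operatorname { log } B } { \mathsf{E} _ { \theta } ( \operatorname { log } \lambda _ { 1 } ) } , \end{equation*}

valid when $\mathsf{E} _ { \theta } ( \operatorname { log } \lambda _ { 1 } ) \neq 0$; in the Bernoulli example above the approximation is exact because the random walk hits the boundaries exactly.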

References

[a1] B. Eisenberg, B.K. Ghosh, "Curtailed and uniformly most powerful sequential tests", Ann. Statist., 8 (1980), pp. 1123–1131
[a2] D. Siegmund, "Sequential analysis: Tests and confidence intervals", Springer (1985)
[a3] A. Wald, "Sequential analysis", Wiley (1947)