Janson inequality

There are several inequalities referred to as the "Janson inequality". They provide exponential upper bounds for the probability that a sum of dependent zero-one random variables (cf. also Random variable) equals zero. The underlying probability space corresponds to selecting a random subset $\Gamma _ { \mathbf{p} }$ of a finite set $\Gamma$, where $\mathbf{p} = \{ p _ { i } : i \in \Gamma \}$, in such a way that the elements are chosen independently, with $\mathsf{P} ( i \in \Gamma _ { \mathbf{p} } ) = p _ { i }$ for each $i \in \Gamma$.

Let $\mathcal{S}$ be a family of subsets of $\Gamma$ and, for every $A \in \mathcal S$, let $I _ { A }$ equal one if $A \subseteq \Gamma _ { \mathbf{p} }$ and zero otherwise. Then $X = \sum _ { A \in \mathcal S } I _ { A }$ counts those elements of $\mathcal{S}$ which are entirely contained in $\Gamma _ { \mathbf{p} }$. Set

\begin{equation*} \lambda = \mathsf{E} ( X ), \end{equation*}

\begin{equation*} \Delta = \frac { 1 } { 2 } \sum _ { A \neq B , A \cap B \neq \emptyset } \mathsf{E} ( I _ { A } I _ { B } ) , \quad \overline { \Delta } = \lambda + 2 \Delta. \end{equation*}
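As an illustration (not part of the original article), the following Python sketch evaluates $\lambda$, $\Delta$ and $\overline { \Delta }$ directly from these definitions by enumeration over pairs; the family $\mathcal{S}$ and the probabilities $p _ { i }$ at the bottom are arbitrary illustrative choices.

```python
def janson_parameters(S, p):
    """Compute lambda, Delta and Delta-bar from the definitions above.

    S : list of sets, the family of subsets of Gamma
    p : dict mapping each element i of Gamma to p_i
    """
    def prob_subset(A):
        # E(I_A) = P(A subset of Gamma_p) = prod_{i in A} p_i, by independence.
        q = 1.0
        for i in A:
            q *= p[i]
        return q

    lam = sum(prob_subset(A) for A in S)

    # Delta: half the sum over ordered pairs A != B with A ∩ B nonempty
    # of E(I_A I_B) = P(A ∪ B subset of Gamma_p).
    delta = 0.0
    for A in S:
        for B in S:
            if A != B and A & B:
                delta += prob_subset(A | B)
    delta /= 2.0
    return lam, delta, lam + 2.0 * delta

# Illustrative choice: Gamma = {1,...,5}, S = all pairs containing 1, p_i = 0.3.
p = {i: 0.3 for i in range(1, 6)}
S = [{1, j} for j in range(2, 6)]
lam, delta, delta_bar = janson_parameters(S, p)
print(lam, delta, delta_bar)  # 0.36, 0.162, 0.684
```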

Then

\begin{equation} \tag{a1} \mathsf{P} ( X = 0 ) \leq \operatorname { exp } ( - \lambda + \Delta ) \end{equation}

and, which gives a better bound when $\Delta > \lambda / 2$,

\begin{equation} \tag{a2} \mathsf{P} ( X = 0 ) \leq \operatorname { exp } \left( - \frac { \lambda ^ { 2 } } { \overline{\Delta} } \right). \end{equation}
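Continuing the sketch above (and assuming its definitions janson_parameters, S and p are still in scope), one can compare both bounds with the exact probability, computed here by brute-force enumeration over all subsets of $\Gamma$, which is feasible only for very small $\Gamma$:

```python
from itertools import combinations
from math import exp

def exact_prob_zero(S, p):
    """P(X = 0) by enumerating all subsets of Gamma; exponential cost."""
    gamma = sorted(set(p))
    total = 0.0
    for r in range(len(gamma) + 1):
        for chosen in combinations(gamma, r):
            chosen = set(chosen)
            if any(A <= chosen for A in S):
                continue  # some A lies inside Gamma_p here, so X > 0
            weight = 1.0
            for i in gamma:
                weight *= p[i] if i in chosen else 1.0 - p[i]
            total += weight
    return total

lam, delta, delta_bar = janson_parameters(S, p)
print("exact P(X=0):", exact_prob_zero(S, p))   # ~0.772
print("bound (a1):  ", exp(-lam + delta))        # ~0.820
print("bound (a2):  ", exp(-lam ** 2 / delta_bar))  # ~0.827
```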

Research leading to these inequalities was motivated by a ground-breaking proof of B. Bollobás [a2], who, in order to estimate the chromatic number of a random graph, used martingales (cf. also Graph colouring; Graph, random; Martingale) to show that the probability of not containing a large clique is very small. Bollobás presented his proof on the opening day of the "Third Conference on Random Graphs" (Poznań, July 1987). By the end of the meeting S. Janson had found a proof of inequality (a1) based on Laplace transforms (cf. also Laplace transform), while T. Łuczak proved a related, less explicit estimate using martingales. The latter result was restricted to a special, though pivotal, context of small subgraphs of random graphs. Soon after, R. Boppana and J. Spencer [a1] gave another proof, resembling the proof of the Lovász local lemma, of the following version of (a1):

\begin{equation} \tag{a3} \mathsf{P} ( X = 0 ) \leq \operatorname { exp } \left\{ \frac { \Delta } { 1 - \epsilon } \right\} \prod _ { A } ( 1 - \mathsf{E} I _ { A } ), \end{equation}

where $\epsilon = \operatorname { max } _ { A } \mathsf{E} I _ { A }$. This version emphasizes the comparison with the independent case.
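Bound (a3) can also be added to the illustrative sketch above (assuming S, p and delta are still in scope); here $\mathsf{E} I _ { A } = \prod _ { i \in A } p _ { i }$ by independence. This is a sketch, not part of the original article:

```python
from math import exp

e_IA = []
for A in S:
    q = 1.0
    for i in A:
        q *= p[i]
    e_IA.append(q)

eps = max(e_IA)
independent_term = 1.0
for q in e_IA:
    independent_term *= 1.0 - q  # exact P(X = 0) in the fully independent case

print("bound (a3):", exp(delta / (1.0 - eps)) * independent_term)  # ~0.820
```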

Under modest symmetry conditions, inequality (a2) appeared in [a3], while the general version was proved in [a4]. See also [a6] for another proof of (a2), but with an extra factor $1/2$ in the exponent.

The quantity $\Delta$ is a measure of the pairwise dependence among the $I _ { A }$'s. If $\Delta = o ( \lambda )$, then the exponents in (a1) and (a2) are both equal to $- \mathsf{E} X ( 1 + o ( 1 ) )$, matching asymptotically a lower bound obtained via the FKG inequality, provided also that $\operatorname { max } p _ { i } \rightarrow 0$.

Example.

Let $n \geq 3$ be an integer and let $\Gamma$ be the set of all two-element subsets of $\{ 1 , \ldots , n \}$. With all $p _ { i } = p = p ( n )$, $i = 1 , \ldots , \binom { n } { 2 }$, the random subset $\Gamma _ { \mathbf{p} }$ is a random graph $\mathbf{G} ( n , p )$. Let $\mathcal{S}$ be the family of all triples of pairs of the form $\{ i j , i k , j k \}$, $1 \leq i < j < k \leq n$. Then $X$ is the number of triangles in $\mathbf{G} ( n , p )$, $\lambda = \binom { n } { 3 } p ^ { 3 }$ and $\Delta = \binom { n } { 4 } \binom { 4 } { 2 } p ^ { 5 }$. Thus, if $p = o ( n ^ { - 1 / 2 } )$, then $\operatorname { ln } \mathsf{P} ( X = 0 ) \sim - \lambda$, while for $p = \Omega ( n ^ { - 1 / 2 } )$, inequality (a2) yields $\mathsf{P} ( X = 0 ) \leq e ^ { - \Omega ( n ^ { 2 } p ) }$. As long as $p = o ( 1 )$, $\operatorname { var } ( X ) \sim \overline { \Delta }$, and the above exponential bounds strengthen the polynomial bound

\begin{equation*} \mathsf{P} ( X = 0 ) \leq \frac { \operatorname { var } ( X ) } { \lambda ^ { 2 } } \end{equation*}

obtained by the method of second moments, i.e. by a corollary of the Chebyshev inequality: indeed, $\mathsf{P} ( X = 0 ) \leq \mathsf{P} ( | X - \lambda | \geq \lambda ) \leq \operatorname { var } ( X ) / \lambda ^ { 2 }$. To illustrate the strength of (a1), fix $p = 10 ^ { 5 } n ^ { - 2 / 3 }$ and assume that $n$ is divisible by $100$. Then (a1) easily implies that, with probability tending to one, more than $99 \%$ of the vertices of $\mathbf{G} ( n , p )$ are covered by vertex-disjoint triangles. Indeed, otherwise there would be a subset of $n / 100$ vertices spanning no triangle. By (a1), the probability that this may happen is smaller than

\begin{equation*} 2 ^ { n } \operatorname { exp } \left\{ - \binom { n / 100 } { 3 } p ^ { 3 } + O ( n ^ { 4 } p ^ { 5 } ) \right\} = o ( 1 ). \end{equation*}
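Both parts of this example are easy to check numerically. The following Python sketch (an illustration added here, not part of the original article; all parameters are arbitrary) compares a Monte Carlo estimate of $\mathsf{P} ( X = 0 )$ with bounds (a1) and (a2), and then evaluates the logarithm of the covering bound above; note that $n \operatorname { ln } 2$ and $\binom { n / 100 } { 3 } p ^ { 3 }$ are both of order $n$, the latter dominating thanks to the constant $10 ^ { 5 }$.

```python
import random
from itertools import combinations
from math import comb, exp, log

def gnp_has_triangle(n, p, rng):
    """Sample G(n, p) and report whether it contains a triangle."""
    adj = [[False] * n for _ in range(n)]
    for i, j in combinations(range(n), 2):
        if rng.random() < p:
            adj[i][j] = adj[j][i] = True
    return any(adj[i][j] and adj[i][k] and adj[j][k]
               for i, j, k in combinations(range(n), 3))

# Monte Carlo check of (a1) and (a2) for triangle counts.
n, p, trials = 30, 0.03, 2000
rng = random.Random(1)
est = sum(not gnp_has_triangle(n, p, rng) for _ in range(trials)) / trials
lam = comb(n, 3) * p ** 3
delta = comb(n, 4) * comb(4, 2) * p ** 5
print("Monte Carlo P(X=0):", est)
print("bound (a1):", exp(-lam + delta))
print("bound (a2):", exp(-lam ** 2 / (lam + 2 * delta)))

# Logarithm of the covering bound 2^m exp{-binom(m/100, 3) q^3}, dropping the
# lower-order O(m^4 q^5) term; q = 1e5 * m**(-2/3) <= 1 requires m >= 10**7.5.
for m in (10 ** 8, 10 ** 9):
    q = 1e5 * m ** (-2 / 3)
    print(m, m * log(2) - comb(m // 100, 3) * q ** 3)  # large and negative
```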

In 1990, Janson [a4] generalized inequality (a2), obtaining an estimate of the lower tail of the distribution of $X$. With $\phi ( x ) = ( 1 + x ) \operatorname { ln } ( 1 + x ) - x$, for $0 \leq t \leq \lambda$,

\begin{equation} \tag{a4} \mathsf P ( X \leq \lambda - t ) \leq \operatorname { exp } \left( - \frac { \phi ( - t / \lambda ) \lambda ^ { 2 } } { \overline { \Delta } } \right) \leq \operatorname { exp } \left( - \frac { t ^ { 2 } } { 2 \overline { \Delta } } \right). \end{equation}
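Both forms of (a4) are straightforward to implement. The following sketch (again an added illustration, using the closed-form triangle parameters from the example above with arbitrary $n$, $p$ and $t$) evaluates them; the $\phi$-form is the sharper of the two for $0 \leq t \leq \lambda$.

```python
from math import comb, exp, log

def phi(x):
    # phi(x) = (1 + x) ln(1 + x) - x
    return (1.0 + x) * log(1.0 + x) - x

def lower_tail_bounds(lam, delta_bar, t):
    """Both forms of (a4): the phi-form and the weaker Chernoff-style form."""
    sharp = exp(-phi(-t / lam) * lam ** 2 / delta_bar)
    weak = exp(-t ** 2 / (2.0 * delta_bar))
    return sharp, weak

# Triangle-count parameters in closed form (see the example above).
n, p = 100, 0.05
lam = comb(n, 3) * p ** 3
delta_bar = lam + 2 * comb(n, 4) * comb(4, 2) * p ** 5
print(lower_tail_bounds(lam, delta_bar, t=0.5 * lam))
```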

When $\mathcal{S}$ consists of mutually disjoint sets, $X$ is a sum of independent zero-one random variables, $\Delta = 0$ and $\overline { \Delta } = \lambda$, so (a4) coincides with the (one-sided) Chernoff bound $\mathsf{P} ( X \leq \lambda - t ) \leq \operatorname { exp } ( - \phi ( - t / \lambda ) \lambda )$. No corresponding upper tail estimate is true in general.

Also in 1990, W.C.S. Suen [a7] proved a related inequality, which remains valid for a general probability space (and not only for random subsets). In his setting, $X$ is a sum of arbitrary indicators $I _ { \alpha }$, and the bound hinges on the dependency graph induced by them. Suen's inequality has been revived and extended in [a5].

For more details on this subject see [a8].

References

[a1] R. Boppana, J. Spencer, "A useful elementary correlation inequality" J. Combin. Th. A , 50 (1989) pp. 305–307
[a2] B. Bollobás, "The chromatic number of random graphs" Combinatorica , 8 (1988) pp. 49–56
[a3] S. Janson, T. Łuczak, A. Ruciński, "An exponential bound for the probability of nonexistence of a specified subgraph in a random graph" M. Karoński (ed.) J. Jaworski (ed.) A. Ruciński (ed.) , Random Graphs '87 , Wiley (1990) pp. 73–87
[a4] S. Janson, "Poisson approximation for large deviations" Random Struct. Algor. , 1 (1990) pp. 221–230
[a5] S. Janson, "New versions of Suen's correlation inequality" Random Struct. Algor. , 13 (1998) pp. 467–483
[a6] J. Spencer, "Threshold functions for extension statements" J. Combin. Th. A , 53 (1990) pp. 286–305
[a7] W.C.S. Suen, "A correlation inequality and a Poisson limit theorem for nonoverlapping balanced subgraphs of a random graph" Random Struct. Algor. , 1 (1990) pp. 231–242
[a8] S. Janson, T. Łuczak, A. Ruciński, "Random graphs" , Wiley (2000)
This article was adapted from an original article by A. Ruciński (originator), which appeared in Encyclopedia of Mathematics, ISBN 1402006098.