author     jeanpouget-abadie <jean.pougetabadie@gmail.com>  2015-02-05 11:05:15 -0500
committer  jeanpouget-abadie <jean.pougetabadie@gmail.com>  2015-02-05 11:05:15 -0500
commit     661fca9e50e29f072b7e592b82462c744c16355a (patch)
tree       1c84bb9c8d4be2cc3b718b1491255e83c718f725
parent     04801a715ac1f72c0fe21dea475674783c92a906 (diff)
download   cascades-661fca9e50e29f072b7e592b82462c744c16355a.tar.gz

    small changes

 -rw-r--r--  paper/sections/results.tex | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/paper/sections/results.tex b/paper/sections/results.tex
index 805f6b1..46521ed 100644
--- a/paper/sections/results.tex
+++ b/paper/sections/results.tex
@@ -35,7 +35,7 @@ write $S\defeq\text{supp}(\theta^*)$ and $s=|S|$. Recall, that $\theta_i$ is the
\end{equation}
\end{definition}
-This condition, well-known in the sparse recovery literature, captures the idea that the binary vectors of active nodes are not colinear with each other, an essentially necessary condition for recovering $\theta$. In fact, the $(S,\gamma)$-{\bf(RE)} assumption is closely linked to the \emph{Fisher Information Matrix}, which captures the amount of information carried by an observable random variable (here the set of active nodes) about an unknown parameter $\theta$ on which the random variable depends.
+This condition, well-known in the sparse recovery literature, captures the idea that the binary vectors of active nodes are not \emph{too} collinear with each other, an essentially necessary condition for recovering $\theta$. In fact, the $(S,\gamma)$-{\bf(RE)} assumption is closely linked to the \emph{Fisher Information Matrix}, which captures the amount of information carried by an observable random variable (here the set of active nodes) about an unknown parameter $\theta$ on which the random variable depends.
A discussion of the $(S,\gamma)$-{\bf(RE)} assumption in the context of generalized linear cascade models can be found in Section~\ref{sec:re}. We will also need the following assumption on the inverse link function $f$ of the generalized linear cascade model:
\begin{equation}
\tag{LF}
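The $(S,\gamma)$-{\bf(RE)} display itself falls outside this hunk. As a rough illustration of the restricted-eigenvalue flavor of the condition discussed above, the following Python sketch (the matrix X, the support S, and all sizes are invented here, not taken from the paper) checks that the Gram matrix of the binary active-node vectors is well conditioned on the support:

    import numpy as np

    # Illustration of the restricted-eigenvalue flavor of (RE), whose exact
    # display is cut off by this hunk: the Gram matrix of the binary
    # active-node vectors, restricted to the support S, should have its
    # smallest eigenvalue bounded below by some gamma > 0.
    rng = np.random.default_rng(0)
    n, m = 500, 20                        # measurements, nodes (invented)
    X = rng.integers(0, 2, size=(n, m))   # hypothetical active-node vectors
    S = [0, 3, 7]                         # hypothetical support of theta*

    G = X[:, S].T @ X[:, S] / n           # Gram matrix restricted to S
    print(np.linalg.eigvalsh(G).min())    # want >= gamma > 0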
@@ -50,7 +50,7 @@ In the voter model, $\frac{f'}{f}(z) = \frac{1}{z}$ and $\frac{f'}{f}(1-z)
= \frac{1}{1-z}$. Hence (LF) will hold as soon as $\alpha\leq \Theta_{i,j}\leq
1-\alpha$ for all $(i,j)\in E$. Similarly, in the Independent Cascade Model,
$\frac{f'}{f}(z) = \frac{1}{1-e^z}$ and $\frac{f'}{1-f}(z) = -1$ and (LF) holds
-if $p_{i, j}\leq 1-\alpha$ for all $(i, j)\in E$. Remember that in this case we
+if $p_{i, j}\geq 1-\alpha$ for all $(i, j)\in E$. Remember that in this case we
have $\Theta_{i,j} = \log(1-p_{i,j})$. These assumptions are reasonable: if an edge has a weight very close to 0, then the ``infection'' will never happen along that edge for our set of observations and we can never hope to recover it.
\begin{theorem}
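As a quick numerical sanity check of the Independent Cascade computation above, assuming the inverse link $f(z) = 1 - e^z$ implied by $\Theta_{i,j} = \log(1-p_{i,j})$ (a sketch, not code from the paper):

    import numpy as np

    # Sanity check of the (LF) discussion for the Independent Cascade Model,
    # assuming the inverse link f(z) = 1 - e^z with z = <Theta, x> <= 0 and
    # Theta_{i,j} = log(1 - p_{i,j}) as in the surrounding text.
    f = lambda z: 1.0 - np.exp(z)
    fp = lambda z: -np.exp(z)        # f'(z)

    for p in [0.9, 0.5, 0.1, 1e-3, 1e-6]:
        z = np.log(1.0 - p)          # one active parent with edge weight p
        print(p, fp(z) / f(z), fp(z) / (1.0 - f(z)))
    # f'/(1-f) is identically -1, matching the text; |f'/f| diverges as
    # p -> 0, which is why the edge weights must be bounded away from 0
    # for (LF) to hold.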
@@ -69,6 +69,8 @@ of \eqref{eq:pre-mle} with $\lambda \defeq 2\sqrt{\frac{\log m}{\alpha n^{1
\end{equation}
\end{theorem}
+Theorem~\ref{thm:main} states that, under the two previous assumptions, the error term decays at rate ${\cal O}(1/\sqrt{n})$, which is believed to be optimal. Note that the bound is expressed in terms of the number of measurements $n$, which is different from the number of cascades: in the voter model with time horizon $T$, for example, $N$ cascades yield $N\times T$ measurements.
+
Before moving to the proof of Theorem~\ref{thm:main}, note that interpreting it
in the case of the Independent Cascade Model requires one more step. Indeed, to
cast it as a generalized linear cascade model, we had to perform the
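To make the measurements-versus-cascades distinction above concrete, here is a small numeric illustration; the values of N, T, m, alpha, and delta are invented, not taken from the paper:

    import numpy as np

    # Invented numbers illustrating the remark above: N voter-model cascades
    # observed over time horizon T give n = N * T measurements, and the
    # theorem's regularization parameter is
    # lambda = 2 * sqrt(log(m) / (alpha * n**(1 - delta))).
    N, T, m = 200, 10, 1000        # cascades, horizon, number of nodes
    alpha, delta = 0.1, 0.1        # constants from (LF) and the theorem

    n = N * T
    lam = 2.0 * np.sqrt(np.log(m) / (alpha * n ** (1 - delta)))
    print(n, lam)                  # the error bound scales as O(1/sqrt(n))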
@@ -162,7 +164,7 @@ Choosing $\lambda\defeq 2\sqrt{\frac{\log m}{\alpha n^{1-\delta}}}$ concludes th
proof.
\end{proof}
-We now show how to use Theorem~\ref{thm:main} to recover the support of
+Note that the proof of Lemma~3 relies crucially on the Azuma-Hoeffding inequality: by supposing ... We now show how to use Theorem~\ref{thm:main} to recover the support of
$\theta^*$, that is, to solve the Graph Inference problem.
\begin{corollary}
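The corollary statement is truncated at this hunk boundary. The usual route from the error bound of Theorem~\ref{thm:main} to support recovery is to threshold the estimate, as in the following hedged sketch (the threshold rule here is generic, not the corollary's exact condition):

    import numpy as np

    # Generic support recovery by thresholding an estimate theta_hat: keep
    # the coordinates whose magnitude exceeds the estimation-error bound.
    # The threshold choice is illustrative, not the corollary's statement.
    def recover_support(theta_hat, err_bound):
        return {j for j, v in enumerate(theta_hat) if abs(v) > err_bound}

    theta_hat = np.array([0.0, 0.8, 0.02, 0.0, -0.5])   # invented estimate
    print(recover_support(theta_hat, 0.1))              # -> {1, 4}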