author    Thibaut Horel <thibaut.horel@gmail.com> 2015-05-19 00:32:39 +0200
committer Thibaut Horel <thibaut.horel@gmail.com> 2015-05-19 00:32:39 +0200
commit    0fd1178dcffa025772bf5bfc43f24e692596747a (patch)
tree      30c634c871efe9116f291456be2f6b6587b37215 /paper/sections
parent    26bfc9b9e69facabe838fc35a96263e5c487c8b3 (diff)
download  cascades-0fd1178dcffa025772bf5bfc43f24e692596747a.tar.gz
Final compression
Diffstat (limited to 'paper/sections')
-rw-r--r--  paper/sections/intro.tex    5
-rw-r--r--  paper/sections/model.tex    1
-rw-r--r--  paper/sections/results.tex  12
3 files changed, 12 insertions, 6 deletions
diff --git a/paper/sections/intro.tex b/paper/sections/intro.tex
index c89c641..cc29ed7 100644
--- a/paper/sections/intro.tex
+++ b/paper/sections/intro.tex
@@ -73,21 +73,26 @@ However, the best known upper bound to this day is $\O(s^2\log
m)$~\cite{Netrapalli:2012, Daneshmand:2014}.
The contributions of this paper are the following:
+\vspace{-1em}
\begin{itemize}
\item we formulate the Graph Inference problem in the context of
discrete-time influence cascades as a sparse recovery problem for
a specific type of Generalized Linear Model. This formulation notably
encompasses the well-studied Independent Cascade Model and Voter Model.
+ \vspace{-0.5em}
\item we give an algorithm that recovers the graph's edges using $\O(s\log
  m)$ cascades. Furthermore, we show that our algorithm is also able to
  efficiently recover the edge weights (the parameters of the influence
  model) up to an additive error term.
+ \vspace{-0.5em}
\item we show that our algorithm is robust in cases where the signal to
recover is approximately $s$-sparse by proving guarantees in the
\emph{stable recovery} setting.
+ \vspace{-0.5em}
\item we provide an almost tight lower bound of $\Omega(s\log \frac{m}{s})$
  on the number of observations required for sparse recovery.
\end{itemize}
+\vspace{-0.5em}
The organization of the paper is as follows: we conclude the introduction with
a survey of related work. In Section~\ref{sec:model} we present our model of
diff --git a/paper/sections/model.tex b/paper/sections/model.tex
index 5d3a232..ecf5ad6 100644
--- a/paper/sections/model.tex
+++ b/paper/sections/model.tex
@@ -234,6 +234,7 @@ cascades, the graph inference problem becomes a linear inverse problem.
Bernoulli realization whose parameters are given by applying $f$ to the
matrix-vector product, where the measurement matrix encodes which nodes are
``contagious'' at each time step.}
+\vspace{-1em}
\end{figure}
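A minimal sketch of the measurement model the caption describes, in the
paper's notation ($x^t$ the vector of ``contagious'' nodes at step $t$, $f$
the link function of the Generalized Linear Model); the exact definitions of
$\Theta$ and $\inprod{\cdot}{\cdot}$ live in the surrounding model.tex, which
this diff does not show:

% Sketch, not verbatim from the paper: each node i contributes one
% Bernoulli observation per time step, with success probability given by
% f applied to the inner product of the weight vector with the current
% vector of contagious nodes (matching the Hessian terms in results.tex).
\begin{equation*}
  \Pr\big[x_i^{t+1} = 1 \,\big|\, x^t\big]
    = f\big(\inprod{\theta^*}{x^t}\big),
  \qquad x_i^{t+1} \in \{0, 1\}.
\end{equation*}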
Inferring the model parameter $\Theta$ from observed influence cascades is the
diff --git a/paper/sections/results.tex b/paper/sections/results.tex
index e91cad4..6b9fd7a 100644
--- a/paper/sections/results.tex
+++ b/paper/sections/results.tex
@@ -121,10 +121,10 @@ Assume {\bf(LF)} holds for some $\alpha>0$. For any $\delta\in(0,1)$:
\end{lemma}
The proof of Lemma~\ref{lem:ub} relies crucially on Azuma-Hoeffding's
-inequality, which allows us to handle correlated observations. This departs from
-the usual assumptions made in sparse recovery settings, where the sequence of
-measurements are assumed to be independent from one another. We now show how
-to use Theorem~\ref{thm:main} to recover the support of $\theta^*$, that is, to
+inequality, which allows us to handle correlated observations. This departs
+from the usual assumptions made in sparse recovery settings, namely that the
+measurements are independent of one another. We now show how to
+use Theorem~\ref{thm:main} to recover the support of $\theta^*$, that is, to
solve the Network Inference problem.
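For reference, the standard bounded-differences form of the Azuma-Hoeffding
inequality being invoked (the paper's specific martingale construction is
inside the proof of Lemma~\ref{lem:ub} and is not shown in this diff):

% Standard statement; the increments bounds c_t depend on the proof's
% construction and are placeholders here, not values from the paper.
\begin{equation*}
  \Pr\big[\,|M_n - M_0| \geq \lambda\,\big]
    \leq 2\exp\!\left(-\frac{\lambda^2}{2\sum_{t=1}^{n} c_t^2}\right)
  \quad \text{whenever } |M_t - M_{t-1}| \leq c_t \text{ for all } t.
\end{equation*}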
\begin{corollary}
@@ -225,7 +225,7 @@ Observe that the Hessian of $\mathcal{L}$ can be seen as a re-weighted
\bigg[x_i^{t+1}\frac{f''f-f'^2}{f^2}(\inprod{\theta^*}{x^t})\\
-(1-x_i^{t+1})\frac{f''(1-f) + f'^2}{(1-f)^2}(\inprod{\theta^*}{x^t})\bigg]
\end{multline*}
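The bracketed factor is the second derivative of the per-observation
log-likelihood; a short expansion of the identity being used (standard
calculus, consistent with the log-convexity condition that follows):

% Since (log f)'' = (f''f - f'^2)/f^2 and
% (log(1-f))'' = -(f''(1-f) + f'^2)/(1-f)^2,
% the bracket above equals
\begin{equation*}
  x_i^{t+1}\,(\log f)''\big(\inprod{\theta^*}{x^t}\big)
  + \big(1 - x_i^{t+1}\big)\,(\log (1-f))''\big(\inprod{\theta^*}{x^t}\big).
\end{equation*}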
-If $f$ and $1-f$ are $c$-strictly log-convex~\cite{bagnoli2005log} for $c>0$,
+If $f$ and $(1-f)$ are $c$-strictly log-convex for $c>0$,
then $ \min\left((\log f)'', (\log (1-f))'' \right) \geq c $. This implies that
the $(S, \gamma)$-({\bf RE}) condition in Theorem~\ref{thm:main} and
Theorem~\ref{thm:approx_sparse} reduces to a condition on the \emph{Gram
@@ -267,7 +267,7 @@ cascade, which are independent, we can apply Theorem 1.8 from
\cite{rudelson:13}, lowering the number of required cascades to $s\log m \log^3(
s\log m)$.
-If $f$ and $1-f$ are strictly log-convex, then the previous observations show
+If $f$ and $(1-f)$ are strictly log-convex, then the previous observations show
that the quantity $\E[\nabla^2\mathcal{L}(\theta^*)]$ in
Proposition~\ref{prop:fi} can be replaced by the expected \emph{Gram matrix}:
$A \equiv \mathbb{E}[X^T X]$. This matrix $A$ has a natural interpretation: the