diff options
| author | jeanpouget-abadie <jean.pougetabadie@gmail.com> | 2015-02-18 00:10:48 -0500 |
|---|---|---|
| committer | jeanpouget-abadie <jean.pougetabadie@gmail.com> | 2015-02-18 00:10:48 -0500 |
| commit | 7d219bcd69ad38af08ca6ba0d0ad025c3fd3cdbe (patch) | |
| tree | d0f8f54b0d220e40605337f29161b162a7b8c59b /paper/supplementary.tex | |
| parent | 858f4e605ddb4b7c127d10b59ce55e82efeb597c (diff) | |
| download | cascades-7d219bcd69ad38af08ca6ba0d0ad025c3fd3cdbe.tar.gz | |
change to suppl
Diffstat (limited to 'paper/supplementary.tex')
| -rw-r--r-- | paper/supplementary.tex | 30 |
1 files changed, 28 insertions, 2 deletions
diff --git a/paper/supplementary.tex b/paper/supplementary.tex
index 6a51d95..541c6fb 100644
--- a/paper/supplementary.tex
+++ b/paper/supplementary.tex
@@ -2,6 +2,7 @@
 \usepackage{amsmath, amssymb, amsthm}
 \date{}
+\author{}
 \title{Supplementary Material \\ {\small Inferring Graphs from Cascades: A Sparse Recovery Framework}}
@@ -9,10 +10,35 @@
 \maketitle
 
-\subsection*{Approximate Sparsity}
-
 \subsection*{(RE) with high probability}
 
+\subsubsection*{(RE) for the Gram matrix}
+
+Recall the expression of the Hessian:
+\begin{equation}
+\nonumber
+\begin{split}
+  \nabla^2\mathcal{L}(\theta^*)
+  = \frac{1}{|\mathcal{T}|}\sum_{t\in\mathcal{T}} x^t (x^t)^T
+  \bigg[ & x_i^{t+1}\frac{f''f - f'^2}{f^2}(\theta^* \cdot x^t) \\
+  & - (1 - x_i^{t+1})\frac{f''(1-f) + f'^2}{(1-f)^2}(\theta^* \cdot x^t)\bigg]
+\end{split}
+\end{equation}
+
+This can be rewritten as $\frac{1}{|\mathcal{T}|} X^T J X$, where the rows of $X$ are the $(x^t)^T$ and $J$ is a diagonal matrix with coefficients either $g_1$ or $g_2$. Under certain regularity conditions on $f$ (such as strong log-concavity), it suffices to consider the Gram matrix: $\Delta^T X^T J X \Delta \geq \gamma_1 \Delta^T X^T X \Delta$.
+
+Denote by $p_i$ the probability that node $i$ is infected at the beginning of the cascade, and let $\mathbb{P}(i)$ be the probability that $i$ is infected at some point during a cascade. For any set of nodes $S$, let $\mathbb{P}(S)$ be the probability that all of $S$ is infected (simultaneously) at some point during a cascade. The diagonal of the Gram matrix can be written as:
+\begin{equation}
+\nonumber
+c_i := \mathbb{P}(i) = p_i - \sum_{S \subseteq \mathcal{N}(i)} (-1)^{|S|} \mathbb{P}(S) \prod_{j \in S} w_{ji} = p_i + \sum_{j \in \mathcal{N}(i)} w_{ji} c_j + \dots
+\end{equation}
+Note that we have the following bounds on $c_i$:
+
+\begin{align}
+c_i &\leq p_i + \sum_{j \in \mathcal{N}(i)} w_{ji} c_j \tag{PageRank}\\
+c_i &\geq p_i + \max_{j \in \mathcal{N}(i)} w_{ji} c_j \tag{?}
+\end{align}
+
+\subsection*{Relaxed (RE) with high probability}
+
 \subsection*{Lower Bound}
 
 \end{document}
\ No newline at end of file
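The strong log-concavity step in the added text, which passes from the Hessian to the Gram matrix, can be made explicit in one line. The sketch below is editorial commentary rather than part of the committed patch; it assumes that the regularity conditions on $f$ give a uniform lower bound $J_{tt} \geq \gamma_1 > 0$ on the (suitably signed) diagonal entries of $J$ — this reading of $\gamma_1$ is an assumption, not stated explicitly in the diff — and that the rows of $X$ are the $(x^t)^T$:

\begin{equation}
\nonumber
\Delta^T X^T J X \Delta
= \sum_{t \in \mathcal{T}} J_{tt}\,(x^t \cdot \Delta)^2
\;\geq\; \gamma_1 \sum_{t \in \mathcal{T}} (x^t \cdot \Delta)^2
= \gamma_1\, \Delta^T X^T X \Delta.
\end{equation}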

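For a concrete instance of the inclusion–exclusion identity behind the (PageRank) upper bound, consider a hypothetical node $i$ with exactly two neighbours $j$ and $k$, and assume, as the first-order expansion in the patch suggests, that the sum runs over nonempty $S \subseteq \mathcal{N}(i)$. Dropping the non-negative pairwise correction term then gives the union-style bound:

\begin{align}
\nonumber
c_i &= p_i + w_{ji}\, c_j + w_{ki}\, c_k - w_{ji} w_{ki}\, \mathbb{P}(\{j, k\}) \\
\nonumber
    &\leq p_i + w_{ji}\, c_j + w_{ki}\, c_k.
\end{align}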