Diffstat (limited to 'paper/sections/results.tex')
-rw-r--r--  paper/sections/results.tex  |  8
1 files changed, 3 insertions, 5 deletions
diff --git a/paper/sections/results.tex b/paper/sections/results.tex
index eadc0ba..ebe90bd 100644
--- a/paper/sections/results.tex
+++ b/paper/sections/results.tex
@@ -7,11 +7,11 @@ Our approach is different. Rather than trying to perform variable selection by f
\forall X \in {\cal C}, \| \Sigma X \|_2^2 \geq \gamma_n \|X\|_2^2 \qquad \ \quad \text{\bf (RE)}
\end{equation}
-We cite the following Theorem from \cite{Negahban:2009}:
+We cite the following theorem from \cite{Negahban:2009}.
\begin{theorem}
\label{thm:neghaban}
-Suppose the true vector $\theta^*$ has support $S$ of size $s$ and that the {\bf(RE)} assumption holds for the Hessian $\nabla^2 f(\theta^*)$. Then, solving Eq.~\ref{...} with $\lambda_n \geq 2 \|\nabla f(\theta^*)\|_{\infty}$, we have:
+Suppose the true vector $\theta^*$ has support $S$ of size $s$ and that the {\bf(RE)} assumption holds for the Hessian $\nabla^2 f(\theta^*)$. Then, solving Eq.~\ref{eq:mle} with $\lambda_n \geq 2 \|\nabla f(\theta^*)\|_{\infty}$, we have:
\begin{equation}
\|\hat \theta - \theta^* \|_2 \leq \frac{\sqrt{s}\lambda_n}{\gamma_n}
\end{equation}
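To put the bound of the theorem above in a more familiar form: under the additional scaling $\|\nabla f(\theta^*)\|_{\infty} \lesssim \sqrt{\log p / n}$, which is standard for sub-Gaussian gradients but assumed here rather than stated in this excerpt, the admissible choice $\lambda_n \asymp \sqrt{\log p / n}$ gives
\[
\|\hat \theta - \theta^* \|_2 \;\lesssim\; \frac{1}{\gamma_n}\sqrt{\frac{s \log p}{n}},
\]
which lines up with the sample-size requirement $n \gtrsim s \log p$ in the corollary that follows.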
@@ -45,11 +45,9 @@ n > \frac{4}{\gamma^2 p_{\min} \eta^2} s \log p
Then with probability $1-e^{-n^\delta \log p}$, $\hat {\cal S}_\eta = {\cal S}^*_{2\eta}$, where ${\cal S}^*_{2\eta} = \{ j \in [1..p] : \theta^*_j > 2 \eta \}$. In other words, we incur no false positives (false parents) and recover all `strong' parents such that $\theta^*_j > 2 \eta$.
\end{corollary}
-Note that $n$ is the number of measurements and not the number of cascades. This is an important improvement over prior work since we expect several measurements per cascade. We claim that graph recovery is better understood as a function of $n$, the cumulative number of steps across all cascades, rather than as a function of $N$, the number of individual cascades.
+Note that $n$ is the number of measurements and not the number of cascades. This is an improvement over prior work since we expect several measurements per cascade. We claim that graph recovery is better understood as a function of $n$, the cumulative number of steps across all cascades, rather than as a function of $N$, the number of individual cascades.
\subsection{Linear Threshold Model}
-
-
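As a purely illustrative reading of the corollary's thresholded estimator $\hat{\cal S}_\eta$, the sketch below assumes an $\ell_1$-regularized logistic likelihood for one node's activations given the nodes active at the previous step; the paper's actual objective in Eq.~\ref{eq:mle} may differ, and the names estimate_parents, X_prev, y_next, lam and eta are hypothetical.

import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_parents(X_prev, y_next, lam, eta):
    """Estimate one node's parent set via l1-regularized logistic regression.

    X_prev : (n, p) 0/1 array; states of the p candidate parents one step earlier,
             one row per measurement (i.e. per cascade step).
    y_next : (n,) 0/1 array; whether the target node activated at the next step.
    lam    : regularization weight standing in for lambda_n.
    eta    : threshold defining the 'strong' parent set.
    """
    # scikit-learn exposes the l1 penalty through C = 1 / (regularization strength);
    # the exact correspondence to lambda_n depends on how the likelihood is
    # normalized, so treat this as a stand-in rather than the paper's objective.
    clf = LogisticRegression(penalty="l1", C=1.0 / lam, solver="liblinear")
    clf.fit(X_prev, y_next)
    theta_hat = clf.coef_.ravel()
    # Keep only coordinates whose estimated weight exceeds the threshold eta.
    return {j for j in range(theta_hat.shape[0]) if theta_hat[j] > eta}

# Toy check: 200 measurements, 10 candidate parents, true parents {0, 3}.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 10))
logits = 2.0 * X[:, 0] + 2.0 * X[:, 3] - 1.0
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-logits))).astype(int)
print(estimate_parents(X, y, lam=0.1, eta=0.5))

The toy check treats each row as one cascade step, so the sample size entering the corollary's condition is the number of rows (measurements), not the number of cascades they came from.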