Diffstat (limited to 'paper/sections/results.tex')
 paper/sections/results.tex | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/paper/sections/results.tex b/paper/sections/results.tex
index e91cad4..6b9fd7a 100644
--- a/paper/sections/results.tex
+++ b/paper/sections/results.tex
@@ -121,10 +121,10 @@ Assume {\bf(LF)} holds for some $\alpha>0$. For any $\delta\in(0,1)$:
\end{lemma}
The proof of Lemma~\ref{lem:ub} relies crucially on the Azuma-Hoeffding
-inequality, which allows us to handle correlated observations. This departs from
-the usual assumptions made in sparse recovery settings, where the sequence of
-measurements are assumed to be independent from one another. We now show how
-to use Theorem~\ref{thm:main} to recover the support of $\theta^*$, that is, to
+inequality, which allows us to handle correlated observations. This departs
+from the usual assumption in sparse recovery settings that the
+measurements are independent of one another. We now show how to
+use Theorem~\ref{thm:main} to recover the support of $\theta^*$, that is, to
solve the Network Inference problem.
\begin{corollary}
@@ -225,7 +225,7 @@ Observe that the Hessian of $\mathcal{L}$ can be seen as a re-weighted
\bigg[x_i^{t+1}\frac{f''f-f'^2}{f^2}(\inprod{\theta^*}{x^t})\\
-(1-x_i^{t+1})\frac{f''(1-f) + f'^2}{(1-f)^2}(\inprod{\theta^*}{x^t})\bigg]
\end{multline*}
-If $f$ and $1-f$ are $c$-strictly log-convex~\cite{bagnoli2005log} for $c>0$,
+If $f$ and $(1-f)$ are $c$-strictly log-convex for $c>0$,
then $ \min\left((\log f)'', (\log (1-f))'' \right) \geq c $. This implies that
the $(S, \gamma)$-({\bf RE}) condition in Theorem~\ref{thm:main} and
Theorem~\ref{thm:approx_sparse} reduces to a condition on the \emph{Gram
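The reduction in the hunk above can be sketched in display form. This is a reading of the stated bound, not text from the paper: $D$ denotes a hypothetical diagonal matrix collecting the bracketed weights, one per observation, and $v$ an arbitrary direction.

```latex
% Sketch, assuming the notation above: if f and (1-f) are c-strictly
% log-convex, each bracketed weight is a convex combination of
% (log f)'' and (log(1-f))'', hence at least c.  Viewing the Hessian
% as the re-weighted Gram matrix X^T D X with D_{tt} >= c gives
\[
  v^T \nabla^2 \mathcal{L}(\theta^*)\, v
    \;=\; v^T X^T D X\, v
    \;\geq\; c\, \lVert X v \rVert_2^2
    \;=\; c\, v^T \big( X^T X \big)\, v,
\]
% so the (S, \gamma)-(RE) condition on the Hessian follows from the
% same condition on the Gram matrix X^T X (with constant \gamma / c).
```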
@@ -267,7 +267,7 @@ cascade, which are independent, we can apply Theorem 1.8 from
\cite{rudelson:13}, lowering the number of required cascades to $s\log m \log^3(
s\log m)$.
-If $f$ and $1-f$ are strictly log-convex, then the previous observations show
+If $f$ and $(1-f)$ are strictly log-convex, then the previous observations show
that the quantity $\E[\nabla^2\mathcal{L}(\theta^*)]$ in
Proposition~\ref{prop:fi} can be replaced by the expected \emph{Gram matrix}:
$A \equiv \mathbb{E}[X^T X]$. This matrix $A$ has a natural interpretation: the