author     jeanpouget-abadie <jean.pougetabadie@gmail.com>  2015-05-19 01:15:33 +0200
committer  jeanpouget-abadie <jean.pougetabadie@gmail.com>  2015-05-19 01:15:33 +0200
commit     a13116fa67cd0811c8660d38e20500433bb7a3a3 (patch)
tree       1d2cecf8acb84dc2e923200b2f0abbf21953b2c2 /paper
parent     3d3e1b5804b871fa9c7bc8fa2a712c997f629c3e (diff)
fixed typos
Diffstat (limited to 'paper')
-rw-r--r--  paper/sections/appendix.tex  2
-rw-r--r--  paper/sections/intro.tex     2
-rw-r--r--  paper/sections/model.tex     2
-rw-r--r--  paper/sections/results.tex   4
4 files changed, 5 insertions, 5 deletions
diff --git a/paper/sections/appendix.tex b/paper/sections/appendix.tex
index 22b87c2..a72114d 100644
--- a/paper/sections/appendix.tex
+++ b/paper/sections/appendix.tex
@@ -159,7 +159,7 @@ convex optimization, the MLE algorithm is faster. This is due to the overhead
caused by the $\ell_1$-regularization in~\eqref{eq:pre-mle}.
The dependency of the running time on the number of cascades is
-linear, as expected. The slope is largest for our algorithm, which is against
+linear, as expected. The slope is largest for our algorithm, which is again
caused by the overhead induced by the $\ell_1$-regularization.
diff --git a/paper/sections/intro.tex b/paper/sections/intro.tex
index cc29ed7..206fbf6 100644
--- a/paper/sections/intro.tex
+++ b/paper/sections/intro.tex
@@ -62,7 +62,7 @@ required number of observed cascades is $\O(poly(s)\log m)$
\cite{Netrapalli:2012, Abrahao:13}.
A more recent line of research~\cite{Daneshmand:2014} has focused on applying
-advances in sparse recovery to the graph inference problem. Indeed, the graph
+advances in sparse recovery to the network inference problem. Indeed, the graph
can be interpreted as a ``sparse signal'' measured through influence cascades
and then recovered. The challenge is that influence cascade models typically
lead to non-linear inverse problems and the measurements (the state of the
diff --git a/paper/sections/model.tex b/paper/sections/model.tex
index ecf5ad6..ec2da8b 100644
--- a/paper/sections/model.tex
+++ b/paper/sections/model.tex
@@ -253,7 +253,7 @@ problem:
\hat{\Theta} \in \argmax_{\Theta} \frac{1}{n}
\mathcal{L}(\Theta\,|\,x^1,\ldots,x^n) - \lambda\|\Theta\|_1
\end{displaymath}
-where $\lambda$ is the regularization factor which helps preventing
+where $\lambda$ is the regularization factor which helps prevent
overfitting and controls the sparsity of the solution.
The generalized linear cascade model is decomposable in the following sense:
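
The objective in the hunk above is the $\ell_1$-regularized MLE. A minimal
proximal-gradient (ISTA-style) sketch of such an estimator follows; NumPy, the
logistic likelihood, and the names soft_threshold, grad_log_likelihood, and
l1_regularized_mle are illustrative assumptions, not the paper's code.

import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1: shrinks each coordinate toward zero.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def grad_log_likelihood(theta, X, y):
    # Gradient of an average logistic log-likelihood (an assumed stand-in
    # for the cascade likelihood L(Theta | x^1, ..., x^n)).
    p = 1.0 / (1.0 + np.exp(-X @ theta))
    return X.T @ (y - p) / len(y)

def l1_regularized_mle(X, y, lam, step=0.1, iters=500):
    # Maximize (1/n) L(theta) - lam * ||theta||_1: a gradient ascent step on
    # the smooth part, then soft-thresholding (the l1 proximal step).
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        theta = soft_threshold(theta + step * grad_log_likelihood(theta, X, y),
                               step * lam)
    return theta

The soft-thresholding step is where $\lambda$ does its work: larger values zero
out more coordinates, which is how the regularization factor controls the
sparsity of the solution.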
diff --git a/paper/sections/results.tex b/paper/sections/results.tex
index 6b9fd7a..af0b076 100644
--- a/paper/sections/results.tex
+++ b/paper/sections/results.tex
@@ -30,7 +30,7 @@ by~\cite{bickel2009simultaneous}.
\begin{definition}
Let $\Sigma\in\mathcal{S}_m(\reals)$ be a real symmetric matrix and $S$ be
a subset of $\{1,\ldots,m\}$. Define $\mathcal{C}(S)\defeq
- \{X\in\reals^m\,:\,\|X\|_1\leq 1\text{ and } \|X_{S^c}\|_1\leq
+ \{X\in\reals^m\,:\,\|X_{S^c}\|_1\leq
3\|X_S\|_1\}$. We say that $\Sigma$ satisfies the
$(S,\gamma)$-\emph{restricted eigenvalue condition} iff:
\begin{equation}
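
The equation opened above is truncated in this hunk; assuming it takes the
standard form $X^T \Sigma X \geq \gamma\|X\|_2^2$ for all
$X\in\mathcal{C}(S)$, as in the cited~\cite{bickel2009simultaneous}, the
condition can be probed numerically. The following Monte Carlo sketch is an
illustration under that assumption, not code from the paper.

import numpy as np

def sample_cone(m, S, rng):
    # Draw a random direction and rescale its off-support part so that
    # ||x_{S^c}||_1 <= 3 ||x_S||_1, i.e. x lies in the cone C(S).
    # Assumes S is a nonempty array of indices.
    x = rng.standard_normal(m)
    Sc = np.setdiff1d(np.arange(m), S)
    budget = 3.0 * np.abs(x[S]).sum()
    off = np.abs(x[Sc]).sum()
    if off > budget:
        x[Sc] *= budget / off
    return x

def re_constant_estimate(Sigma, S, n_samples=10000, seed=0):
    # Sample minimum of the Rayleigh quotient over C(S). The result
    # upper-bounds the true restricted eigenvalue and tightens as n_samples
    # grows, so it can refute the condition but never certify it.
    rng = np.random.default_rng(seed)
    m = Sigma.shape[0]
    S = np.asarray(S)
    best = np.inf
    for _ in range(n_samples):
        x = sample_cone(m, S, rng)
        best = min(best, x @ Sigma @ x / (x @ x))
    return best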
@@ -268,7 +268,7 @@ cascade, which are independent, we can apply Theorem 1.8 from
s\log m)$.
If $f$ and $(1-f)$ are strictly log-convex, then the previous observations show
-that the quantity $\E[\nabla2\mathcal{L}(\theta^*)]$ in
+that the quantity $\E[\nabla^2\mathcal{L}(\theta^*)]$ in
Proposition~\ref{prop:fi} can be replaced by the expected \emph{Gram matrix}:
$A \equiv \mathbb{E}[X^T X]$. This matrix $A$ has a natural interpretation: the
entry $a_{i,j}$ is the probability that node $i$ and node $j$ are infected at
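
The expected Gram matrix $A \equiv \mathbb{E}[X^T X]$ in this last hunk has a
direct empirical counterpart. A small sketch, assuming the rows of X are binary
infection-indicator vectors (one row per observation, one column per node);
this layout is an assumption for illustration, not the paper's code.

import numpy as np

def empirical_gram(X):
    # (1/n) X^T X estimates A = E[X^T X]; entry (i, j) is the empirical
    # frequency with which nodes i and j are both infected.
    X = np.asarray(X, dtype=float)
    return X.T @ X / X.shape[0]

# Example: 3 observations over 3 nodes.
X = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1]])
A_hat = empirical_gram(X)  # A_hat[0, 2] == 1/3: nodes 0 and 2 co-infected once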