path: root/paper/sections/experiments.tex
author     Thibaut Horel <thibaut.horel@gmail.com>  2015-02-06 15:48:24 -0500
committer  Thibaut Horel <thibaut.horel@gmail.com>  2015-02-06 15:48:24 -0500
commit     0ff14f56819acfc7be77f9237e18417d465b2266 (patch)
tree       32576d399ce36de031188e1ffff5b8e3f56b4336 /paper/sections/experiments.tex
parent     724dae4487559d7e52c5ac56b9059d124b664a13 (diff)
download   cascades-0ff14f56819acfc7be77f9237e18417d465b2266.tar.gz
Compression
Diffstat (limited to 'paper/sections/experiments.tex')
-rw-r--r--  paper/sections/experiments.tex  13
1 file changed, 11 insertions, 2 deletions
diff --git a/paper/sections/experiments.tex b/paper/sections/experiments.tex
index 1b72753..2369b11 100644
--- a/paper/sections/experiments.tex
+++ b/paper/sections/experiments.tex
@@ -45,10 +45,19 @@ We did not benchmark against other known algorithms (\textsc{netrate} \cite{gome
In the case of the \textsc{lasso}, \textsc{mle} and \textsc{sparse mle} algorithms, we construct the edge set of $\hat{\cal G}$ as $\cup_{j \in V} \{(i,j) : \Theta_{ij} > 0.1\}$, \emph{i.e.}, by thresholding. The true positives are the edges which appear in both ${\cal G}$ and $\hat{\cal G}$, and the true negatives are the node pairs which are edges in neither. Finally, we report the F1-score $= 2\cdot\text{precision}\cdot\text{recall}/(\text{precision}+\text{recall})$, which combines the fraction of edges returned by the algorithm that are true edges (\emph{precision}) with the fraction of true edges that the algorithm recovers (\emph{recall}).
Over all experiments, \textsc{sparse mle} achieves higher rates of precision,
-recall, and f1-score. Interestingly, both \textsc{mle} and \textsc{sparse mle} perform exceptionally well on the Watts-Strogatz graph. The recovery rate converges at
+recall, and F1-score. Interestingly, both \textsc{mle} and \textsc{sparse mle} perform exceptionally well on the Watts-Strogatz graph.
+\begin{comment}
+ The recovery rate converges at
around $5000$ cascades, which is more than $15$ times the number of nodes. By
contrast, \textsc{sparse mle} achieves a reasonable F$1$-score of $.75$ for roughly $500$ observed cascades.
+\end{comment}
\paragraph{Quantifying robustness}
-The previous experiments only considered graphs with strong edges. To test the algorithms in the approximately sparse case, we add weak edges to the previous graphs: for every non-edge, a Bernoulli variable with parameter $1/3$ decides whether an edge is added, with its weight drawn uniformly from $[0,0.1]$. The results are reported in Figure~\ref{fig:four_figs} (d)-(e) by plotting the convergence of the $\ell_2$-norm error, and show that the \textsc{lasso}, followed by \textsc{sparse mle}, is the most robust to noise.
+The previous experiments only considered graphs with strong edges. To test the
+algorithms in the approximately sparse case, we add weak edges to the previous
+graphs: for every non-edge, a Bernoulli variable with parameter $1/3$ decides
+whether an edge is added, with its weight drawn uniformly from $[0,0.1]$. The
+non-sparse case is compared to the sparse case in Figure~\ref{fig:four_figs}
+(d)--(e) in terms of the $\ell_2$-norm error, showing that the \textsc{lasso},
+followed by \textsc{sparse mle}, is the most robust to noise.
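As an illustration of the evaluation protocol described in the patched text (threshold the estimated weight matrix at 0.1, then score the recovered edge set against the true graph with precision, recall, and F1), here is a minimal sketch. It assumes the estimate is a dense NumPy matrix and the true graph is given as a set of edges; the names theta_hat and true_edges are illustrative and not taken from the paper's code.

import numpy as np

def recovered_edges(theta_hat, threshold=0.1):
    """Edge set of the estimated graph: all pairs (i, j) with weight above the threshold."""
    return {(int(i), int(j)) for i, j in np.argwhere(theta_hat > threshold)}

def precision_recall_f1(true_edges, estimated_edges):
    """Score the recovered edge set against the true edge set."""
    tp = len(true_edges & estimated_edges)                  # edges appearing in both graphs
    precision = tp / len(estimated_edges) if estimated_edges else 0.0
    recall = tp / len(true_edges) if true_edges else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1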
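The weak-edge perturbation in the robustness paragraph (for every non-edge, a Bernoulli(1/3) draw decides whether to add an edge, with weight uniform on [0, 0.1]) admits a similar sketch, assuming the graph is stored as a weighted adjacency matrix; add_weak_edges and its parameters are hypothetical names used only for illustration.

import numpy as np

def add_weak_edges(adj, p=1/3, low=0.0, high=0.1, rng=None):
    """Turn every zero off-diagonal entry of a float adjacency matrix into a weak edge
    with probability p, drawing its weight uniformly from [low, high]."""
    rng = np.random.default_rng() if rng is None else rng
    non_edge = (adj == 0)
    np.fill_diagonal(non_edge, False)                 # do not add self-loops
    mask = non_edge & (rng.random(adj.shape) < p)     # Bernoulli(p) per non-edge
    noisy = adj.copy()
    noisy[mask] = rng.uniform(low, high, size=mask.sum())
    return noisy

The l2-norm error plotted in panels (d)--(e) would then correspond, up to whatever normalization the paper uses, to np.linalg.norm(theta_hat - noisy) between the estimated and the perturbed true weight matrices.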