Diffstat (limited to 'paper/sections/experiments.tex')
-rw-r--r--  paper/sections/experiments.tex  15
1 files changed, 12 insertions, 3 deletions
diff --git a/paper/sections/experiments.tex b/paper/sections/experiments.tex
index 7cb852c..48ffc02 100644
--- a/paper/sections/experiments.tex
+++ b/paper/sections/experiments.tex
@@ -13,7 +13,10 @@
& \includegraphics[scale=.23]{figures/kronecker_l2_norm_nonsparse.pdf}\\
(a) Barabasi-Albert & (b) Watts-Strogatz & (c) sparse Kronecker & (d) non-sparse Kronecker
\end{tabular}
-\captionof{figure}{Figures (a) and (b) report the $f1$-score in $\log$ scale for 2 graphs: (a) Barabasi-Albert graph, $300$ nodes, $16200$ edges. (b) Watts-Strogatz graph, $300$ nodes, $4500$ edges. Figures (c) and (d) report the $\ell2$-norm $\|\hat \Theta - \Theta\|_2$ for a Kronecker graph which is: (c) exactly sparse (d) non-exactly sparse}
+\captionof{figure}{Figures (a) and (b) report the F$1$-score on a $\log$ scale
+for two graphs: (a) a Barabasi-Albert graph with $300$ nodes and $16200$ edges; (b) a
+Watts-Strogatz graph with $300$ nodes and $4500$ edges. Figures (c) and (d) report the
+$\ell_2$-norm $\|\hat \Theta - \Theta\|_2$ for a Kronecker graph that is (c) exactly sparse and (d) not exactly sparse.}
\label{fig:four_figs}
\end{table*}
@@ -35,8 +38,14 @@ We did not benchmark against other known algorithms (\textsc{netrate} \cite{gome
\paragraph{Graph Estimation}
In the case of the \textsc{lasso}, \textsc{mle} and \textsc{sparse mle} algorithms, we construct the edge set of $\hat {\cal G}$ as $\cup_{j \in V} \{(i,j) : \hat\Theta_{ij} > 0.1\}$, \emph{i.e.}, by thresholding. The true positives are the edges which appear in both ${\cal G}$ and $\hat {\cal G}$, and the true negatives are the edges which appear in neither. Finally, we report the F1-score$=2 \text{precision}\cdot\text{recall}/(\text{precision}+\text{recall})$, which combines the fraction of edges returned by the algorithm that are true edges (\emph{precision}) with the fraction of true edges that the algorithm recovers (\emph{recall}).
-Over all experiments, \textsc{sparse mle} achieves higher rates of precision, recall, and f1-score. \textsc{sparse mle} is also robust to approximate sparsity, and displays a faster convergence of the $\ell2$-norm than any other benchmark. Interestingly, both \textsc{mle} and \textsc{sparse mle} perform exceptionally well on the Watts-Strogatz graph. The recovery rate converges at around $5000$ cascades, which is more than $15$ times the number of nodes. By contrast, \textsc{sparse mle} achieves a reasonable $f1$-score of $.75$ for roughly $500$ observed cascades.
+Over all experiments, \textsc{sparse mle} achieves higher rates of precision,
+recall, and F$1$-score. \textsc{sparse mle} is also robust to approximate
+sparsity, and displays faster convergence of the $\ell_2$-norm error than any other
+benchmark. Interestingly, both \textsc{mle} and \textsc{sparse mle} perform
+exceptionally well on the Watts-Strogatz graph. The recovery rate converges at
+around $5000$ cascades, which is more than $15$ times the number of nodes. By
+contrast, \textsc{sparse mle} already achieves a reasonable F$1$-score of $0.75$ with roughly $500$ observed cascades.
\paragraph{Quantifying robustness}
-The previous experiments only considered graphs with strong edges. To test the algorithms in the approximately sparse case, we add sparse edges to the previous graphs according to a bernoulli variable of parameter $1/3$ for every non-edge, and drawing a weight uniformly from $[0,0.1]$. The results are reported in Figure~\ref{fig:four_figs} by plotting the convergence of the $\ell2$-norm error, and show that both the \textsc{lasso}, followed by \textsc{sparse mle} are the most robust to noise.
\ No newline at end of file
+The previous experiments only considered graphs with strong edges. To test the algorithms in the approximately sparse case, we add weak edges to the previous graphs: each non-edge becomes an edge according to a Bernoulli variable with parameter $1/3$, with a weight drawn uniformly from $[0,0.1]$. The results are reported in Figure~\ref{fig:four_figs} by plotting the convergence of the $\ell_2$-norm error, and show that the \textsc{lasso}, followed by \textsc{sparse mle}, is the most robust to noise.
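
The graph-estimation paragraph above evaluates recovery by thresholding the estimated matrix at $0.1$ and scoring the resulting edge set with precision, recall, and F1. A minimal NumPy sketch of that evaluation is given below; the function name f1_score_from_theta, the dense-matrix representation, and the default threshold argument are illustrative assumptions, not the paper's code.

import numpy as np

def f1_score_from_theta(theta_hat, adjacency_true, tau=0.1):
    """Threshold the estimated influence matrix at tau (the paper uses 0.1)
    and compare the recovered edge set against the ground-truth adjacency."""
    est_edges = theta_hat > tau           # edges of the estimated graph
    true_edges = adjacency_true > 0       # edges of the true graph

    tp = np.sum(est_edges & true_edges)   # edges present in both graphs
    fp = np.sum(est_edges & ~true_edges)  # returned edges that are not true edges
    fn = np.sum(~est_edges & true_edges)  # true edges that were missed

    precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)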
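The robustness paragraph perturbs each graph by turning every non-edge into a weak edge with probability $1/3$ and weight uniform on $[0,0.1]$. The sketch below shows one way to implement that perturbation; the helper name add_weak_edges and its arguments are assumptions for illustration only.

import numpy as np

def add_weak_edges(theta, p=1/3, w_max=0.1, rng=None):
    """For every non-edge, add a weak edge with probability p, drawing its
    weight uniformly from [0, w_max], so the graph is only approximately sparse."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = theta.copy()
    non_edges = (theta == 0)
    np.fill_diagonal(non_edges, False)            # leave self-loops untouched
    flip = rng.random(theta.shape) < p            # Bernoulli(p) draw per entry
    weights = rng.uniform(0.0, w_max, theta.shape)
    mask = non_edges & flip
    noisy[mask] = weights[mask]
    return noisy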