author     jeanpouget-abadie <jean.pougetabadie@gmail.com> 2015-02-05 14:10:31 -0500
committer  jeanpouget-abadie <jean.pougetabadie@gmail.com> 2015-02-05 14:10:31 -0500
commit     973feaebd723fb337582492de2e318de7e18fde1 (patch)
tree       3463714a9ff4c20ee3e6da42f34d145d68047c11 /paper/sections/experiments.tex
parent     b499eb1348d0b59ea25977ad2a7c08d6dcb2848a (diff)
adding figure
Diffstat (limited to 'paper/sections/experiments.tex')
-rw-r--r--  paper/sections/experiments.tex | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/paper/sections/experiments.tex b/paper/sections/experiments.tex
index a4fd1e9..3891526 100644
--- a/paper/sections/experiments.tex
+++ b/paper/sections/experiments.tex
@@ -13,7 +13,8 @@
& \includegraphics[scale=.23]{figures/kronecker_l2_norm_nonsparse.pdf}\\
(a) Barabasi-Albert & (b) Watts-Strogatz & (c) sparse Kronecker & (d) non-sparse Kronecker
\end{tabular}
-\captionof{figure}{Figures (a) and (b) report the $f1$-score in $\log$ scale for 2 graphs: (a) Barabasi-Albert graph, $300$ nodes, $16200$ edges. (b) Watts-Strogatz graph, $300$ nodes, $4500$ edges. Figures (c) and (d) report the $\ell2$-norm $\|\hat \Theta - \Theta\|_2$ in the exactly sparse case and the approximately sparse case for a Kronecker graph which is: (c) exactly sparse (d) non-exactly spasre}
+\captionof{figure}{Figures (a) and (b) report the $F_1$-score on a $\log$ scale for two graphs: (a) a Barab\'asi--Albert graph with $300$ nodes and $16200$ edges; (b) a Watts--Strogatz graph with $300$ nodes and $4500$ edges. Figures (c) and (d) report the $\ell_2$-norm error $\|\hat \Theta - \Theta\|_2$ for a Kronecker graph that is (c) exactly sparse and (d) not exactly sparse.}
+\label{fig:four_figs}
\end{table*}
In this section, we empirically validate the results and assumptions of Section~\ref{sec:results} for different initializations of the parameters ($n$, $m$, $\lambda$) and for varying levels of sparsity. We compare our algorithm against two state-of-the-art algorithms, \textsc{greedy} and \textsc{mle}, from \cite{Netrapalli:2012}. As an additional benchmark, we also introduce a new algorithm, \textsc{lasso}, which approximates our \textsc{sparse mle} algorithm. We find empirically that \textsc{lasso} is highly robust and can be computed more efficiently than both \textsc{mle} and \textsc{sparse mle}, without sacrificing performance.
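
The \textsc{lasso} benchmark amounts to one $\ell_1$-penalized regression per node, whose nonzero coefficients are read off as that node's incoming edges. A minimal Python sketch of this idea follows; the diff does not show the actual implementation, so the cascade encoding and every name below (lasso_parents, X, y, lam) are assumptions, not the paper's code.

    import numpy as np
    from sklearn.linear_model import Lasso

    def lasso_parents(X, y, lam=0.1, threshold=1e-3):
        # Assumed encoding: X[t, j] = 1 if node j was active at step t of a
        # cascade, and y[t] = 1 if the target node fired at step t + 1.
        # Fit one l1-penalized regression per target node; the nonzero
        # coefficients are the predicted incoming edges of that node.
        model = Lasso(alpha=lam, fit_intercept=False, positive=True)
        model.fit(X, y)
        theta_hat = model.coef_
        parents = np.flatnonzero(np.abs(theta_hat) > threshold)
        return theta_hat, parents

Scoring the recovered edge set against the true graph with precision and recall would then yield an $F_1$-score of the kind plotted in panels (a) and (b).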
@@ -38,4 +39,4 @@ This algorithm, which we name \textsc{Lasso}, has the merit of being both easier
\paragraph{Quantifying robustness}
-The previous experiments only considered graphs with strong edges. To test the algorithms in the approximately sparse case, we add sparse edges to the previous graphs according to a bernoulli variable of parameter $1/3$ for every non-edge, and drawing a weight uniformly from $[0,0.1]$. The results are reported in Figure XXX by plotting the convergence of the $\ell2$-norm error, and show that both the \textsc{lasso}, followed by \textsc{sparse mle} are the most robust to noise. \ No newline at end of file
+The previous experiments only considered graphs with strong edges. To test the algorithms in the approximately sparse case, we add weak edges to the previous graphs: every non-edge is turned into an edge independently with probability $1/3$ (one Bernoulli trial per non-edge), with a weight drawn uniformly from $[0,0.1]$. Figure~\ref{fig:four_figs} reports the convergence of the $\ell_2$-norm error, and shows that \textsc{lasso}, followed by \textsc{sparse mle}, is the most robust to noise.
\ No newline at end of file
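
The perturbation used in this robustness test is straightforward to reproduce. Below is a minimal sketch under the assumption that graphs are stored as weighted networkx digraphs; the function name add_weak_edges is hypothetical.

    import numpy as np
    import networkx as nx

    def add_weak_edges(G, p=1/3, w_max=0.1, seed=0):
        # For every ordered non-edge (u, v) of G, flip a Bernoulli(p) coin
        # and, on success, add a weak edge with weight ~ Uniform[0, w_max].
        rng = np.random.default_rng(seed)
        H = G.copy()
        for u in G.nodes:
            for v in G.nodes:
                if u != v and not G.has_edge(u, v) and rng.random() < p:
                    H.add_edge(u, v, weight=rng.uniform(0.0, w_max))
        return H

The resulting graph is only approximately sparse: the original strong edges remain, joined by many weak edges of weight at most $0.1$.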