Diffstat (limited to 'paper')
-rw-r--r--  paper/figures/kronecker_l2_norm.pdf            bin 0 -> 30708 bytes
-rw-r--r--  paper/figures/kronecker_l2_norm_nonsparse.pdf  bin 0 -> 30729 bytes
-rw-r--r--  paper/figures/watts_strogatz.pdf               bin 13567 -> 30785 bytes
-rw-r--r--  paper/paper.tex                                2
-rw-r--r--  paper/sections/experiments.tex                 19
5 files changed, 12 insertions, 9 deletions
diff --git a/paper/figures/kronecker_l2_norm.pdf b/paper/figures/kronecker_l2_norm.pdf
new file mode 100644
index 0000000..5177233
--- /dev/null
+++ b/paper/figures/kronecker_l2_norm.pdf
Binary files differ
diff --git a/paper/figures/kronecker_l2_norm_nonsparse.pdf b/paper/figures/kronecker_l2_norm_nonsparse.pdf
new file mode 100644
index 0000000..18ceabf
--- /dev/null
+++ b/paper/figures/kronecker_l2_norm_nonsparse.pdf
Binary files differ
diff --git a/paper/figures/watts_strogatz.pdf b/paper/figures/watts_strogatz.pdf
index 79df8d4..ddaa525 100644
--- a/paper/figures/watts_strogatz.pdf
+++ b/paper/figures/watts_strogatz.pdf
Binary files differ
diff --git a/paper/paper.tex b/paper/paper.tex
index 1201b0f..36e87be 100644
--- a/paper/paper.tex
+++ b/paper/paper.tex
@@ -41,7 +41,7 @@
% note in the first column to ``Proceedings of the...''
%\usepackage[accepted]{icml2015}
\usepackage[utf8]{inputenc}
-
+\usepackage{caption}
% The \icmltitle you define below is probably too long as a header.
% Therefore, a short form for the running title is supplied here:
\icmltitlerunning{Sparse Recovery for Graph Inference}
diff --git a/paper/sections/experiments.tex b/paper/sections/experiments.tex
index 600cac3..ccd82ce 100644
--- a/paper/sections/experiments.tex
+++ b/paper/sections/experiments.tex
@@ -3,15 +3,18 @@
\caption{Precision-recall curve for the Holme-Kim model. 200 nodes, 16200 edges.}
\end{figure}
-\begin{figure}
-\includegraphics[scale=.4]{figures/watts_strogatz.pdf}
-\caption{Watts-Strogatz Model. 200 nodes, 20000 edges.}
-\end{figure}
+\begin{table*}[t]
+\centering
+\begin{tabular}{c c c c}
-\begin{figure}
-\includegraphics[scale=.4]{figures/barabasi_albert.pdf}
-\caption{Barabasi Model.}
-\end{figure}
+\includegraphics[scale=.21]{figures/barabasi_albert.pdf}
+& \includegraphics[scale=.21]{figures/watts_strogatz.pdf}
+& \includegraphics[scale=.23]{figures/kronecker_l2_norm.pdf}
+& \includegraphics[scale=.23]{figures/kronecker_l2_norm_nonsparse.pdf}\\
+(a) & (b) & (c) & (d)
+\end{tabular}
+\captionof{figure}{blabla}
+\end{table*}
In this section, we empirically validate the results and assumptions of Section~\ref{sec:results} for different settings of the parameters ($n$, $m$, $\lambda$) and for varying levels of sparsity. We compare our algorithm against two state-of-the-art algorithms, \textsc{greedy} and \textsc{mle} from \cite{Netrapalli:2012}. As an additional benchmark, we also introduce a new algorithm, \textsc{lasso}, which approximates our \textsc{sparse mle} algorithm. We find empirically that \textsc{lasso} is highly robust and can be computed more efficiently than both \textsc{mle} and \textsc{sparse mle} without sacrificing performance.
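
For readers skimming the diff, the \textsc{lasso} benchmark mentioned in the closing paragraph is, generically, an $\ell_1$-penalized least-squares surrogate for the sparse MLE. The display below is only an illustrative template under that assumption: the symbols $A$ and $b$ stand in for a design matrix and response built from the observed cascade data and are not defined in this hunk; $\lambda$, $m$, and $n$ are the parameters named in the paragraph above.

% Illustrative template only (not necessarily the paper's exact formulation):
% a generic l1-penalized least-squares program. A and b are hypothetical
% stand-ins for the cascade-derived design matrix and response; lambda is the
% regularization parameter, m the number of observations, n the dimension.
\[
  \hat{x} \;=\; \arg\min_{x \in \mathbb{R}^n}
    \;\frac{1}{2m}\,\bigl\| A x - b \bigr\|_2^2 \;+\; \lambda\,\| x \|_1
\]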
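
On the LaTeX side, the two hunks above work together: the \usepackage{caption} line added in paper.tex provides \captionof, and \captionof{figure} is what lets the full-width table* environment in experiments.tex carry a figure caption (and step the figure counter) for the four-panel layout. A minimal standalone sketch of the same pattern follows; the document class, figure paths, and caption text are placeholders, not the paper's actual sources.

\documentclass[twocolumn]{article}
\usepackage{graphicx}   % use [draft] to compile without the actual PDFs
\usepackage{caption}    % provides \captionof
\begin{document}
% A table* float spans both columns of a two-column layout; \captionof{figure}
% attaches a Figure caption to it (rather than the Table caption that a plain
% \caption would produce inside a table float).
\begin{table*}[t]
\centering
\begin{tabular}{c c c c}
\includegraphics[scale=.21]{figures/barabasi_albert.pdf}
& \includegraphics[scale=.21]{figures/watts_strogatz.pdf}
& \includegraphics[scale=.23]{figures/kronecker_l2_norm.pdf}
& \includegraphics[scale=.23]{figures/kronecker_l2_norm_nonsparse.pdf}\\
(a) & (b) & (c) & (d)
\end{tabular}
\captionof{figure}{Placeholder caption for panels (a)--(d).}
\end{table*}
Some body text.
\end{document}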