path: root/finale/sections/experiments.tex
author    jeanpouget-abadie <jean.pougetabadie@gmail.com>    2015-12-11 17:11:12 -0500
committer jeanpouget-abadie <jean.pougetabadie@gmail.com>    2015-12-11 17:11:12 -0500
commit    a08f475a5ace6d069ba4cf0c93c6ef4df2b117b0 (patch)
tree      009cc3d5500765848903afa68dd4e10fe5299a6c /finale/sections/experiments.tex
parent    16493ef0bb95d1faf1a00d67682bca889ed8c55c (diff)
download  cascades-a08f475a5ace6d069ba4cf0c93c6ef4df2b117b0.tar.gz
experiments section start
Diffstat (limited to 'finale/sections/experiments.tex')
-rw-r--r--  finale/sections/experiments.tex  21
1 file changed, 20 insertions(+), 1 deletion(-)
diff --git a/finale/sections/experiments.tex b/finale/sections/experiments.tex
index c9cf762..14c83f6 100644
--- a/finale/sections/experiments.tex
+++ b/finale/sections/experiments.tex
@@ -1,7 +1,26 @@
-implementation: PyMC (scalability), blocks
+In this section, we apply the framework of Sections~\ref{sec:bayes}
+and~\ref{sec:active} to synthetic graphs and cascades, in order to validate the
+Bayesian approach and the effectiveness of the active learning heuristics.
+
+We started by using the PyMC library to sample from the posterior distribution
+directly. This method scales poorly with the number of nodes in the graph:
+graphs with $\geq 100$ nodes could not be learned in a reasonable amount of
+time. In Section~\ref{sec:appendix}, we show the progressive convergence of the
+posterior around the true edge weights for a graph of size $4$.
+
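+As a minimal sketch of the kind of model involved (assuming a single edge
+weight with a uniform Beta prior, Bernoulli activation observations, and the
+PyMC3 interface; the names and data are illustrative, not the model used in
+our experiments), direct posterior sampling looks roughly as follows:
+\begin{verbatim}
+import numpy as np
+import pymc3 as pm
+
+# Illustrative data: 1 if the edge fired while its source node was active.
+activations = np.array([0, 1, 0, 0, 1, 0, 0, 0, 1, 0])
+
+with pm.Model():
+    # Uniform prior over a single edge weight of the cascade model.
+    w = pm.Beta('w', alpha=1.0, beta=1.0)
+    # Each observed cascade step is a Bernoulli trial with probability w.
+    pm.Bernoulli('obs', p=w, observed=activations)
+    # MCMC sampling of the posterior; this is the step that scales poorly.
+    trace = pm.sample(2000)
+
+print(trace['w'].mean(), trace['w'].std())
+\end{verbatim}
+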
+In order to show the effect of the active learning policies, we needed to scale
+the experiments to graphs of size $\geq 1000$, which required the variational
+inference procedure. A graph of size $1000$ has roughly $1M$ edge weights to be
+learned, one per ordered pair of nodes ($2M$ parameters under the product-prior
+of Eq.~\ref{eq:gaussianprior}). The maximum-likelihood estimator converges to
+an $\ell_\infty$-error of $0.05$ for most graphs after observing at least
+$100M$ distinct cascade steps.
+
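+As a hedged sketch of how a variational fit and the $\ell_\infty$-error could
+be obtained (assuming a dense matrix of edge weights, mean-field ADVI through
+PyMC3's \texttt{pm.fit}, and a hypothetical ground-truth matrix
+\texttt{w\_true}; this is not the exact experimental code), one could write:
+\begin{verbatim}
+import numpy as np
+import pymc3 as pm
+
+n = 50                          # illustrative; the experiments use n >= 1000
+w_true = np.random.uniform(0.0, 0.1, size=(n, n))  # hypothetical ground truth
+# Hypothetical observations: one Bernoulli draw per edge per cascade step.
+steps = 200
+data = np.random.binomial(1, w_true, size=(steps, n, n))
+
+with pm.Model():
+    # One weight per ordered pair of nodes: ~1M weights when n = 1000.
+    w = pm.Beta('w', alpha=1.0, beta=1.0, shape=(n, n))
+    pm.Bernoulli('obs', p=w, observed=data)
+    # Mean-field ADVI: one mean and one variance per weight (~2M parameters).
+    approx = pm.fit(n=10000, method='advi')
+
+w_hat = approx.sample(1000)['w'].mean(axis=0)  # posterior-mean estimate
+print(np.abs(w_hat - w_true).max())            # l-infinity error
+\end{verbatim}
+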
baseline
+fair comparison of online learning
+
graphs/datasets
bullshit