Diffstat (limited to 'finale/sections')
-rw-r--r--   finale/sections/experiments.tex | 21
1 file changed, 20 insertions, 1 deletion
diff --git a/finale/sections/experiments.tex b/finale/sections/experiments.tex
index c9cf762..14c83f6 100644
--- a/finale/sections/experiments.tex
+++ b/finale/sections/experiments.tex
@@ -1,7 +1,26 @@
-implementation: PyMC (scalability), blocks
+In this section, we apply the framework of Sections~\ref{sec:bayes}
+and~\ref{sec:active} to synthetic graphs and cascades, validating both the
+Bayesian approach and the effectiveness of the active-learning heuristics.
+
+We first used the library PyMC to sample from the posterior distribution
+directly. This method scales poorly with the number of nodes in the graph:
+graphs of size $\geq 100$ could not be learned in reasonable time. In
+Section~\ref{sec:appendix}, we show the progressive convergence of the
+posterior around the true edge weights for a
+graph of size $4$.
+
+To demonstrate the effect of the active-learning policies, we needed to scale
+the experiments to graphs of size $\geq 1000$, which required the
+variational-inference procedure. A graph of size $1000$ has $1M$ parameters to
+be learned ($2M$ with the product prior of Eq.~\ref{eq:gaussianprior}). The
+maximum-likelihood estimator converges to an $l_\infty$-error of $0.05$ for
+most graphs after observing at least $100M$ distinct cascade steps.
+
 baseline
+fair comparison of online learning
+
 graphs/datasets bullshit
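The edge-weight learning described in the added paragraphs can be illustrated with a small sketch. This is not the patch's actual PyMC or variational-inference code: it substitutes conjugate Beta-Bernoulli posterior updates for full posterior sampling, and the function name, toy graph, and sample counts are all invented for illustration. It shows the same quantities the text discusses, namely a per-edge posterior over activation probabilities and the resulting $l_\infty$ estimation error.

```python
# Hypothetical sketch: Bayesian estimation of edge activation probabilities
# from simulated cascade steps, via conjugate Beta-Bernoulli updates
# (a stand-in for the paper's PyMC sampling / variational inference).
import random


def learn_edge_weights(true_weights, n_steps, seed=0):
    """Estimate each edge's activation probability from simulated
    cascade steps (one independent activation attempt per edge per step)."""
    rng = random.Random(seed)
    # Beta(1, 1) prior per edge: alpha counts activations, beta counts misses.
    alpha = {e: 1.0 for e in true_weights}
    beta = {e: 1.0 for e in true_weights}
    for _ in range(n_steps):
        for e, w in true_weights.items():
            if rng.random() < w:
                alpha[e] += 1.0
            else:
                beta[e] += 1.0
    # Posterior mean of each edge weight.
    return {e: alpha[e] / (alpha[e] + beta[e]) for e in true_weights}


# Toy 3-node graph with two weighted edges (illustrative values).
true_w = {("a", "b"): 0.3, ("b", "c"): 0.7}
est = learn_edge_weights(true_w, n_steps=20000)
# l_inf error between estimated and true edge weights.
max_err = max(abs(est[e] - true_w[e]) for e in true_w)
```

With enough observed cascade steps the posterior means concentrate around the true weights, mirroring the convergence behaviour reported in the section; the observation budget needed grows with graph size, which is why the full experiments required the variational procedure.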
