In this section, we apply the framework of Sections~\ref{sec:bayes}
and~\ref{sec:active} to synthetic graphs and cascades, validating both the
Bayesian approach and the effectiveness of the active-learning heuristics.
We started with the PyMC library~\cite{pymc}, which implements MCMC algorithms
to sample from the posterior distribution directly. We found this method to
scale poorly with the number of nodes in the graph: graphs of size $\geq 100$
could not reasonably be learned in the time available. In the appendix, we show
the progressive convergence of the posterior around the true edge weights for
a graph of size $4$.
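As an illustration of what the MCMC approach computes, the following is a
minimal numpy sketch (not our PyMC model) of posterior sampling for a single
edge weight via random-walk Metropolis; the true weight, proposal scale, and
observation count are placeholders chosen for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: one edge with true transmission probability 0.3;
# each observation records whether the infection crossed the edge.
theta_true = 0.3
obs = rng.random(500) < theta_true  # 500 simulated cascade steps

def log_post(theta):
    """Log-posterior under a flat prior on (0, 1): Bernoulli likelihood."""
    if not 0.0 < theta < 1.0:
        return -np.inf
    k = obs.sum()
    return k * np.log(theta) + (len(obs) - k) * np.log(1.0 - theta)

# Random-walk Metropolis: propose a perturbation, accept with the usual
# Metropolis ratio; this is the kind of kernel MCMC libraries automate.
samples, theta = [], 0.5
for _ in range(5000):
    prop = theta + 0.05 * rng.standard_normal()
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta)

posterior_mean = np.mean(samples[1000:])  # discard burn-in
```

With enough observations, the posterior mean concentrates near the true
weight, which is the convergence behavior shown in the appendix.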
\paragraph{VI implementation.} To show the effect of the active-learning
policies, we needed to scale the experiments to graphs of size $\geq 1000$. A
graph of size $1000$ has $1$M parameters to learn ($2$M under the
product prior of Eq.~\ref{eq:gaussianprior}). The maximum-likelihood estimator
converges to an $\ell_\infty$-error of $0.05$ for most graphs only after
observing at least $100$M distinct cascade steps. Given the number of
parameters to learn and the number of cascades to process, we resorted to
variational inference.
We therefore used the Blocks~\cite{blocks} framework, built on top of Theano
for automatic differentiation and highly efficient SGD routines with minimal
memory overhead. By encoding a cascade step as a tuple of two binary vectors,
one for the infected nodes and one for the susceptible nodes, the
variational-inference objective can be written as a simple computation graph
over matrices, which Theano compiles to machine code optimized for GPU
execution.
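The binary-vector encoding can be sketched as follows, assuming an
independent-cascade likelihood for one step; the variable names and the small
example vectors are illustrative, not taken from our implementation:

```python
import numpy as np

# One cascade step under an independent-cascade model: susceptible node j is
# infected with probability 1 - prod_{i infected} (1 - Theta[i, j]).
rng = np.random.default_rng(1)
n = 5
Theta = rng.uniform(0.0, 0.5, size=(n, n))  # edge transmission probabilities
np.fill_diagonal(Theta, 0.0)

infected    = np.array([1, 0, 1, 0, 0], dtype=float)  # binary vector x
susceptible = np.array([0, 1, 0, 1, 1], dtype=float)  # binary vector s
outcome     = np.array([0, 1, 0, 0, 1], dtype=float)  # who got infected next

# Survival probability of each node j: prod_i (1 - Theta[i, j])^{x_i},
# computed as a single vector-matrix product in log space.
p_not = np.exp(infected @ np.log1p(-Theta))
p_inf = 1.0 - p_not

# Bernoulli log-likelihood restricted to the susceptible nodes; an expression
# of exactly this shape is what Theano fuses into one GPU kernel.
ll = np.sum(susceptible * (outcome * np.log(p_inf)
                           + (1.0 - outcome) * np.log(p_not)))
```

The whole step is two matrix operations, which is what makes the objective
cheap to differentiate and batch on a GPU.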
\paragraph{Synthetic graph.} Intuitively, if the nodes of a graph are
exchangeable (\emph{i.e.}, their topological properties, such as degree, are
identical), an active-learning policy will make little difference compared to
the uniform-source policy. We therefore test our algorithms on an unbalanced
graph $\mathcal{G}_A$ whose adjacency matrix $A$ is as follows:
\begin{equation*}
A = \left( \begin{array}{cccccc}
0 & 1 & 1 & 1 & \dots & 1 \\
0 & 0 & 1 & 0 & \dots & 0 \\
0 & 0 & 0 & 1 & \dots & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
0 & 1 & 0 & 0 & \dots & 0
\end{array}
\right)
\end{equation*}
In other words, $\mathcal{G}_A$ is a directed wheel graph: the center node
points to every rim node, and every rim node points to its (clockwise)
neighbor.
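For concreteness, the adjacency matrix above can be generated as follows (a
short sketch; the function name is ours):

```python
import numpy as np

def wheel_adjacency(n):
    """Adjacency matrix of G_A: node 0 (the center) points to all rim
    nodes, and each rim node points to its clockwise neighbor."""
    A = np.zeros((n, n), dtype=int)
    A[0, 1:] = 1                       # center -> every rim node
    for i in range(1, n):
        A[i, 1 + i % (n - 1)] = 1      # rim node i -> next rim node
    return A

A = wheel_adjacency(6)
```

The first row is all ones (the center's out-edges) and each remaining row has
a single one, matching the matrix displayed above.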
\paragraph{Baseline.} To make the baseline fair, we generate cascades from the
chosen source node on the fly, both for the uniform-source policy and for the
active-learning policy; each cascade is therefore `observed' only once. We
plot the RMSE of the estimated graph, \emph{i.e.}, $\mathrm{RMSE}^2
= \frac{1}{n^2} \|\hat{\mathbf{\Theta}} - \mathbf{\Theta}\|^2_2$. The results
are presented in Figure~\ref{fig:graphical}.
\begin{figure}
\centering
\includegraphics[scale=.4]{finale.png}
\caption{RMSE of the estimated edge weights under the uniform-source and
active-learning policies.}
\label{fig:graphical}
\end{figure}
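The reported metric is straightforward to compute; a minimal numpy check,
where the estimate $\hat{\mathbf{\Theta}}$ is replaced by a placeholder noisy
copy of the truth:

```python
import numpy as np

# RMSE^2 = ||Theta_hat - Theta||_F^2 / n^2, i.e. Frobenius norm divided by n.
rng = np.random.default_rng(2)
n = 100
Theta     = rng.uniform(size=(n, n))                    # true edge weights
Theta_hat = Theta + 0.01 * rng.standard_normal((n, n))  # placeholder estimate

rmse = np.linalg.norm(Theta_hat - Theta) / n  # np.linalg.norm is Frobenius here
```

With i.i.d.\ Gaussian estimation noise of standard deviation $\sigma$ on every
entry, the RMSE concentrates around $\sigma$, which makes the metric easy to
read off the plot.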