Diffstat (limited to 'finale/sections/experiments.tex')
 -rw-r--r-- finale/sections/experiments.tex | 23
 1 file changed, 21 insertions(+), 2 deletions(-)
diff --git a/finale/sections/experiments.tex b/finale/sections/experiments.tex
index 169554f..8e5a9eb 100644
--- a/finale/sections/experiments.tex
+++ b/finale/sections/experiments.tex
@@ -23,9 +23,28 @@
 one for the susceptible nodes, the variational inference objective can be
 written as a sum of two matrix multiplications, which Theano optimizes for
 on GPU.
 
-baseline
+Intuitively, if the nodes of our graph are exchangeable, the active learning
+policy will have little advantage over the uniform-source policy. We therefore
+test our algorithms on an unbalanced graph $\mathcal{G}_A$ whose adjacency
+matrix $A$ is as follows:
+\begin{equation*}
+A = \left( \begin{array}{cccccc}
+0 & 1 & 1 & 1 & \dots & 1 \\
+0 & 0 & 1 & 0 & \dots & 0 \\
+0 & 0 & 0 & 1 & \dots & 0 \\
+\vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
+0 & 1 & 0 & 0 & \dots & 0
+\end{array}
+\right)
+\end{equation*}
 
-fair comparison of online learning
+In other words, $\mathcal{G}_A$ is a star graph in which, additionally, every
+node except the center points to its (clockwise) neighbor. To keep the
+baseline fair, we create cascades starting from the source node on the fly,
+both for the uniform-source and for the active learning policy, so that each
+cascade is `observed' only once. We plot the RMSE of the estimated graph,
+i.e.\ $\mathrm{RMSE}^2 = \frac{1}{n^2} \|\hat{\mathbf{\Theta}} -
+\mathbf{\Theta}\|^2_2$.
 
 graphs/datasets
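For concreteness, the graph and metric introduced by this patch can be sketched in NumPy. This is a hypothetical illustration, not code from the repository: `star_cycle_adjacency` builds the adjacency matrix $A$ (center node points to every other node; each peripheral node points to its clockwise neighbor), and `rmse` computes the per-entry RMSE between the estimated and true transmission matrices, i.e. the Frobenius norm of their difference divided by $n$.

```python
import numpy as np

def star_cycle_adjacency(n):
    """Adjacency matrix of the unbalanced graph described in the diff.

    Node 0 (the star center) points to every other node, and each
    peripheral node i points to its clockwise neighbor i+1, with the
    last peripheral node wrapping back around to node 1.
    """
    A = np.zeros((n, n), dtype=int)
    A[0, 1:] = 1                        # center -> all peripheral nodes
    for i in range(1, n):
        A[i, 1 + (i % (n - 1))] = 1     # i -> clockwise neighbor
    return A

def rmse(theta_hat, theta):
    """RMSE over all n^2 entries: sqrt(||theta_hat - theta||_F^2 / n^2)."""
    n = theta.shape[0]
    return np.linalg.norm(theta_hat - theta) / n
```

For $n = 6$ the first, second, and last rows of `star_cycle_adjacency(6)` match the rows shown in the matrix above.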
