\documentclass{article}
\usepackage{fullpage, amsmath, amssymb}
\title{Extensions}
\begin{document}
\section{Gram matrix}
In the ICML paper, we showed that the graph can be recovered as long as the
expected Gram matrix $\mathcal{G}$ of the observations has its restricted
eigenvalues bounded away from $0$. Recall that the Gram matrix $G$ is
given by $X^\top X$, where $X$ is an $n\times m$ matrix, $n$ is the number of
measurements, and $m$ is the number of nodes: if $X_i$ is the ($m$-dimensional)
column vector indicating the nodes infected at step $i$, then the rows of $X$
are the vectors $X_i^\top$. It follows that
$$\mathcal{G} = \mathbb{E}\left[\sum_i X_i X_i^\top\right].$$
Note that the indices of the sum are themselves random variables! Furthermore,
recall that the restricted eigenvalue of $\mathcal{G}$ over a support set $S$ is
the smallest value of $v^\top \mathcal{G} v / \|v\|_2^2$ over nonzero vectors
$v$ in the cone $\|v_{S^c}\|_1 \leq 3 \|v_S\|_1$.
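Since the rows of $X$ are the indicator vectors $X_i^\top$, the sum of outer products $\sum_i X_i X_i^\top$ coincides with $X^\top X$. The sketch below (an illustrative check with arbitrary small sizes, not code from the paper) verifies this identity numerically on a random $0/1$ measurement matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 4  # n measurements, m nodes (illustrative sizes)

# Each row of X is an indicator vector X_i^T of the nodes infected at step i.
X = rng.integers(0, 2, size=(n, m)).astype(float)

# Empirical Gram matrix as a sum of outer products of the rows ...
G_outer = sum(np.outer(X[i], X[i]) for i in range(n))
# ... which coincides with X^T X, an m-by-m matrix over the nodes.
G_matmul = X.T @ X

assert np.allclose(G_outer, G_matmul)
```

Note that the resulting matrix is $m\times m$ (indexed by nodes), which is the object whose restricted eigenvalues matter for recovery.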
\subsection{Voter model}
In the voter model, $\mathbb{P}[X_j^{t+1} = 1 \mid X^t] = \sum_{i=1}^m A_{i,j}
X^t_i$, where $A$ is the weighted adjacency matrix of the graph. Furthermore,
the model runs until a fixed time horizon $T$, a hyperparameter of the model.
Therefore, $\mathbb{E}[X^{t+1} \mid X^t] = A^\top X^t$ and $\mathcal{G} =
\sum_{i=1}^T (A^\top)^i \, \mathbb{E}[X_0 X_0^\top] \, A^i$. In the case of the
single-source model, the source is uniform over the $m$ nodes, so
$\mathbb{E}[X_0 X_0^\top] = \frac{1}{m} I_m$ and it follows that
$$\mathcal{G} = \frac{1}{m} \sum_{i=1}^T (A^\top)^i A^i.$$
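For a small graph, this closed form is easy to evaluate directly. The sketch below (a numerical illustration with a toy matrix, not the paper's code) computes $\frac{1}{m}\sum_{i=1}^T (A^\top)^i A^i$ for a column-stochastic $A$ (an assumption that makes $\sum_i A_{i,j} X^t_i$ a valid probability) and confirms that $\mathcal{G}$ is symmetric positive semidefinite, so its restricted eigenvalues are well defined.

```python
import numpy as np

# Toy weighted adjacency matrix; columns sum to 1 so that
# sum_i A[i, j] * X[i] is a valid infection probability (an assumption).
A = np.array([[0.5, 0.3, 0.2],
              [0.3, 0.4, 0.5],
              [0.2, 0.3, 0.3]])
m = A.shape[0]
T = 5  # time horizon of the voter model

# G = (1/m) * sum_{i=1}^T (A^T)^i A^i
G = sum(np.linalg.matrix_power(A.T, i) @ np.linalg.matrix_power(A, i)
        for i in range(1, T + 1)) / m

# Each term (A^T)^i A^i = (A^i)^T (A^i) is PSD, hence so is G.
assert np.allclose(G, G.T)
assert np.linalg.eigvalsh(G).min() >= -1e-12
```

The smallest eigenvalue of this $\mathcal{G}$ (or its restricted analogue over the cone) is the quantity the recovery guarantee requires to be bounded away from $0$.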
\end{document}