Solving the Graph Inference problem with sparse recovery techniques opens new
avenues for future work. First, the sparse recovery literature has already
studied regularization schemes beyond the $\ell_1$-norm, notably the
thresholded and adaptive lasso~\cite{vandegeer:2011, Zou:2006}. Another goal
would be to obtain confidence intervals for our estimator, similar to what
has been obtained for the Lasso in the recent series of papers
\cite{javanmard2014, zhang2014}.

Finally, the linear threshold model is a commonly studied diffusion process and
can also be cast as a \emph{generalized linear cascade} with inverse link
function $z \mapsto \mathbbm{1}_{z > 0}$:
$
    X^{t+1}_j = \text{sign}\left(\inprod{\theta_j}{X^t} - t_j\right)
$.
This model therefore falls into the 1-bit compressed sensing framework
\cite{Boufounos:2008}. Several recent papers study the theoretical
guarantees obtained for 1-bit compressed sensing with specific measurements
\cite{Gupta:2010, Plan:2014}. Whilst they obtain bounds of the order
${\cal O}(s \log \frac{m}{s})$, no current theory exists for recovering
positive bounded signals from binary measurements. This research direction
may provide the first clues to solving the ``adaptive learning'' problem: if we
are allowed to adaptively \emph{choose} the source nodes at the beginning of
each cascade, how much can we improve the current results?
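For concreteness, the deterministic dynamics of a linear threshold cascade can be sketched in a few lines of code. This is a minimal simulation under our notation, not part of the paper's experiments: `Theta[i, j]` plays the role of $\Theta_{i,j}$ (the weight of edge $i \to j$), `thresholds[j]` is $t_j$, and the function name `lt_cascade` is our own.

```python
import numpy as np

def lt_cascade(Theta, thresholds, seeds, max_steps=100):
    """Simulate a linear threshold cascade.

    Theta[i, j] is the weight of edge i -> j (column sums at most 1),
    thresholds[j] is node j's activation threshold, and seeds is the
    initially active set. Returns the indicator vectors X^0, X^1, ...
    """
    m = Theta.shape[0]
    X = np.zeros(m)
    X[list(seeds)] = 1.0
    trace = [X.copy()]
    for _ in range(max_steps):
        # Node j activates when <theta_j, X^t> exceeds t_j; active nodes
        # stay active, so we take the pointwise max with the current state.
        X_next = np.maximum(X, (Theta.T @ X > thresholds).astype(float))
        if np.array_equal(X_next, X):
            break  # no new activations: the cascade has terminated
        X = X_next
        trace.append(X.copy())
    return trace
```

Once the seed set is fixed, the trajectory is fully determined, which is why each step yields exactly one binary measurement of $\theta_j$ in the 1-bit compressed sensing sense.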

\begin{comment}
The Linear Threshold model can \emph{also} be cast as a generalized linear
cascade model. However, as we show below, its link function is
non-differentiable and necessitates a different analysis. In the Linear
Threshold Model, each node $j\in V$ has a threshold $t_j$ in the interval
$[0,1]$, and the sum of incoming weights at each node is at most $1$:
$\forall j\in V$, $\sum_{i=1}^m \Theta_{i,j} \leq 1$.

Nodes have two states: susceptible or active. At each time step, each
susceptible node $j$ becomes active if the sum of the incoming weights from
active parents is greater than $j$'s threshold $t_j$. Nodes remain active until
the end of the cascade, which is reached when no new susceptible nodes become
active. As such, once the source nodes are chosen, the process is entirely
deterministic. Let $X^t$ be the indicator variable of the set of active nodes
at time step $t$, then:
\begin{equation}
    X^{t+1}_j = \mathbbm{1}_{\sum_{i=1}^m \Theta_{i,j}X^t_i > t_j}
    = \mathbbm{1}_{\inprod{\theta_j}{X^t} > t_j}
\end{equation}
where we defined again $\theta_j\defeq (\Theta_{1,j},\ldots,\Theta_{m,j})$. In
other words, for every step in the linear threshold model, we observe the
following signal:

\begin{equation}
    \label{eq:lt}
    \tag{LT}
    X^{t+1}_j = \text{sign} \left(\inprod{\theta_j}{X^t} - t_j \right)
\end{equation}

The link function of the linear threshold model is the step function $z
\mapsto \mathbbm{1}_{z > 0}$. This model therefore falls into the 1-bit
compressed sensing framework \cite{Boufounos:2008}. Several recent papers
study the theoretical guarantees obtained for 1-bit compressed sensing with
specific measurements \cite{Gupta:2010, Plan:2014}. Whilst they obtain
bounds of the order ${\cal O}(n \log \frac{n}{s})$, no current theory exists
for recovering positive bounded signals from Bernoulli hyperplanes. We leave
this research direction to future work.
\end{comment}