Diffstat (limited to 'paper/sections/discussion.tex')
-rw-r--r--  paper/sections/discussion.tex | 37 +++++++++++++++++++++++++++++--------
1 file changed, 29 insertions(+), 8 deletions(-)
diff --git a/paper/sections/discussion.tex b/paper/sections/discussion.tex
index 52eb6c5..03e7ff2 100644
--- a/paper/sections/discussion.tex
+++ b/paper/sections/discussion.tex
@@ -1,12 +1,14 @@
 Solving the Graph Inference problem with sparse recovery techniques opens new
 venues for future work. Firstly, the sparse recovery literature has already
 studied regularization patterns beyond the $\ell_1$-norm, notably the
-thresholded and adaptive lasso \cite{vandegeer:2011, Zou:2006}. Another goal
+thresholded and adaptive lasso~\cite{vandegeer:2011, Zou:2006}. Another goal
 would be to obtain confidence intervals for our estimator, similarly to what
 has been obtained for the Lasso in the recent series of papers
 \cite{javanmard2014, zhang2014}.

-Finally, the linear threshold model is a commonly studied diffusion process and can also be cast as a \emph{generalized linear cascade} with inverse link function $z \mapsto \mathbbm{1}_{z > 0}$:
+Finally, the linear threshold model is a commonly studied diffusion process and
+can also be cast as a \emph{generalized linear cascade} with inverse link
+function $z \mapsto \mathbbm{1}_{z > 0}$:
 $
 \label{eq:lt}
 X^{t+1}_j = \text{sign} \left(\inprod{\theta_j}{X^t} - t_j \right)
@@ -16,21 +18,33 @@ This model therefore falls into the 1-bit compressed sensing framework
 guarantees obtained for 1-bit compressed sensing with specific measurements
 \cite{Gupta:2010, Plan:2014}. Whilst they obtained bounds of the order
 ${\cal O}(n \log \frac{m}{s})$, no current theory exists for recovering
-positive bounded signals from biniary measurememts. This research direction
+positive bounded signals from binary measurements. This research direction
 may provide the first clues to solve the ``adaptive learning'' problem: if we
 are allowed to adaptively \emph{choose} the source nodes at the beginning of
 each cascade, how much can we improve the current results?

 \begin{comment}
-The Linear Threshold model can \emph{also} be cast a generalized linear cascade model. However, as we show below, its link function is non-differentiable and necessitates a different analysis. In the Linear Threshold Model, each node $j\in V$ has a threshold $t_j$ from the interval $[0,1]$ and for each node, the sum of incoming weights is less than $1$: $\forall j\in V$, $\sum_{i=1}^m \Theta_{i,j} \leq 1$.
+The Linear Threshold model can \emph{also} be cast as a generalized linear
+cascade model. However, as we show below, its link function is
+non-differentiable and necessitates a different analysis. In the Linear
+Threshold Model, each node $j\in V$ has a threshold $t_j$ from the interval
+$[0,1]$ and for each node, the sum of incoming weights is less than $1$:
+$\forall j\in V$, $\sum_{i=1}^m \Theta_{i,j} \leq 1$.

-Nodes have two states: susceptible or active. At each time step, each susceptible
-node $j$ becomes active if the sum of the incoming weights from active parents is greater than $j$'s threshold $t_j$. Nodes remain active until the end of the cascade, which is reached when no new susceptible nodes become active. As such, the source nodes are chosen, the process is entirely deterministic. Let $X^t$ be the indicator variable of the set of active nodes at time step $t-1$, then:
+Nodes have two states: susceptible or active. At each time step, each
+susceptible node $j$ becomes active if the sum of the incoming weights from
+active parents is greater than $j$'s threshold $t_j$. Nodes remain active until
+the end of the cascade, which is reached when no new susceptible nodes become
+active. As such, once the source nodes are chosen, the process is entirely
+deterministic. Let $X^t$ be the indicator variable of the set of active nodes
+at time step $t-1$, then:

 \begin{equation}
 X^{t+1}_j = \mathbbm{1}_{\sum_{i=1}^m \Theta_{i,j}X^t_i > t_j} = \mathbbm{1}_{\inprod{\theta_j}{X^t} > t_j}
 \end{equation}
-where we defined again $\theta_j\defeq (\Theta_{1,j},\ldots,\Theta_{m,j})$. In other words, for every step in the linear threshold model, we observe the following signal:
+where we defined again $\theta_j\defeq (\Theta_{1,j},\ldots,\Theta_{m,j})$. In
+other words, for every step in the linear threshold model, we observe the
+following signal:

 \begin{equation}
 \label{eq:lt}
@@ -38,5 +52,12 @@ where we defined again $\theta_j\defeq (\Theta_{1,j},\ldots,\Theta_{m,j})$. In o
 X^{t+1}_j = \text{sign} \left(\inprod{\theta_j}{X^t} - t_j \right)
 \end{equation}

-The link function of the linear threshold model is the sign function: $z \mapsto \mathbbm{1}_{z > 0}$. This model therefore falls into the 1-bit compressed sensing model \cite{Boufounos:2008} framework. Several recent papers study the theoretical guarantees obtained for 1-bit compressed sensing with specific measurements \cite{Gupta:2010}, \cite{Plan:2014}. Whilst they obtained bounds of the order ${\cal O}(n \log \frac{n}{s}$), no current theory exists for recovering positive bounded signals from bernoulli hyperplanes. We leave this research direction to future work.
+The link function of the linear threshold model is the sign function: $z
+\mapsto \mathbbm{1}_{z > 0}$. This model therefore falls into the 1-bit
+compressed sensing framework \cite{Boufounos:2008}. Several recent papers
+study the theoretical guarantees obtained for 1-bit compressed sensing with
+specific measurements \cite{Gupta:2010}, \cite{Plan:2014}. Whilst they obtained
+bounds of the order ${\cal O}(n \log \frac{n}{s})$, no current theory exists
+for recovering positive bounded signals from Bernoulli hyperplanes. We leave
+this research direction to future work.
 \end{comment}
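The linear threshold update in the diff above, $X^{t+1}_j = \mathbbm{1}_{\inprod{\theta_j}{X^t} > t_j}$ with $\theta_j$ the $j$-th column of $\Theta$, can be sketched as a single synchronous step in NumPy. This is an illustrative sketch, not code from the paper; the function name `lt_step` and the toy weights and thresholds are made up for illustration.

```python
import numpy as np

def lt_step(Theta, x, t):
    """One synchronous linear-threshold update.

    Theta : (m, m) array, Theta[i, j] = weight of edge i -> j
    x     : (m,) 0/1 array, indicator of currently active nodes
    t     : (m,) array of node thresholds in [0, 1]

    Node j activates when the total weight of its active in-neighbours
    exceeds its threshold: <theta_j, x> > t_j, the indicator link 1{z > 0}.
    """
    x = np.asarray(x)
    activated = (Theta.T @ x > t).astype(int)
    # Per the paper's description, nodes remain active once activated.
    return np.maximum(activated, x.astype(int))

# Toy 3-node instance (weights and thresholds are illustrative only).
Theta = np.array([[0.0, 0.6, 0.3],
                  [0.0, 0.0, 0.4],
                  [0.0, 0.0, 0.0]])
t = np.array([0.5, 0.5, 0.5])
x0 = np.array([1, 0, 0])    # seed cascade at node 0
x1 = lt_step(Theta, x0, t)  # node 1 activates: 0.6 > 0.5
x2 = lt_step(Theta, x1, t)  # node 2 activates: 0.3 + 0.4 > 0.5
```

Note that `Theta.T @ x` computes all inner products $\inprod{\theta_j}{x}$ at once, so one call evaluates the generalized linear cascade link for every node simultaneously.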
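The commented-out passage also notes that once the sources are chosen the process is entirely deterministic, and that each observed step is a 1-bit measurement of $\theta_j$ against the hyperplane $\inprod{\theta_j}{x} = t_j$. A minimal sketch of that view, with hypothetical helper names (`run_cascade`, `one_bit_measurements`) and an illustrative toy instance:

```python
import numpy as np

def run_cascade(Theta, t, x0):
    """Iterate the linear-threshold update until no new node activates.

    Returns the trajectory [x^0, x^1, ...] of 0/1 activation indicators;
    the cascade is deterministic given the seed set x0.
    """
    traj = [np.asarray(x0, dtype=int)]
    while True:
        x = traj[-1]
        nxt = np.maximum((Theta.T @ x > t).astype(int), x)
        if np.array_equal(nxt, x):
            return traj
        traj.append(nxt)

def one_bit_measurements(traj, j):
    """Pairs (x^t, x^{t+1}_j): each is a 1-bit observation of column
    theta_j, i.e. the sign of <theta_j, x^t> - t_j (for susceptible j)."""
    return [(traj[k], traj[k + 1][j]) for k in range(len(traj) - 1)]

# Toy 3-node instance, Theta[i, j] = weight of edge i -> j (illustrative).
Theta = np.array([[0.0, 0.6, 0.3],
                  [0.0, 0.0, 0.4],
                  [0.0, 0.0, 0.0]])
t = np.array([0.5, 0.5, 0.5])
traj = run_cascade(Theta, t, np.array([1, 0, 0]))
bits = [b for _, b in one_bit_measurements(traj, 2)]
```

Each pair returned by `one_bit_measurements` is exactly the kind of binary measurement the discussion relates to 1-bit compressed sensing: the measurement vectors $x^t$ are the cascade states rather than the random designs assumed in that literature.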
