Diffstat (limited to 'paper/sections/discussion.tex')
 -rw-r--r--  paper/sections/discussion.tex | 22
 1 file changed, 19 insertions, 3 deletions
diff --git a/paper/sections/discussion.tex b/paper/sections/discussion.tex
index 25d9c53..b2947a1 100644
--- a/paper/sections/discussion.tex
+++ b/paper/sections/discussion.tex
@@ -1,7 +1,15 @@
 \paragraph{Future Work}
-Solving the Graph Inference problem with sparse recovery techniques opens new venues for future work. Firstly, the sparse recovery literature has already studied regularization patterns beyond the $\ell-1$-norm, notably the thresholded and adaptive lasso \cite{vandegeer:2011} \cite{Zou:2006}. Another series of papers that are directly relevant to the Graph Inference setting have shown that confidence intervals can be established for the lasso. Finally, the linear threshold model is a commonly studied diffusion process and can also be cast as a \emph{generalized linear cascade} with inverse link function $z \mapsto \mathbbm{1}_{z > 0}$:
+Solving the Graph Inference problem with sparse recovery techniques opens new
+avenues for future work. Firstly, the sparse recovery literature has already
+studied regularization patterns beyond the $\ell_1$-norm, notably the
+thresholded and adaptive lasso \cite{vandegeer:2011, Zou:2006}. Another goal
+would be to obtain confidence intervals for our estimator, similarly to what
+has been obtained for the Lasso in the recent series of papers
+\cite{javanmard2014, zhang2014}.
+
+Finally, the linear threshold model is a commonly studied diffusion process and can also be cast as a \emph{generalized linear cascade} with inverse link function $z \mapsto \mathbbm{1}_{z > 0}$:
 \begin{equation}
 \label{eq:lt}
@@ -9,7 +17,15 @@ Solving the Graph Inference problem with sparse recovery techniques opens new ve
 X^{t+1}_j = \text{sign} \left(\inprod{\theta_j}{X^t} - t_j \right)
 \end{equation}
-This model therefore falls into the 1-bit compressed sensing model \cite{Boufounos:2008} framework. Several recent papers study the theoretical guarantees obtained for 1-bit compressed sensing with specific measurements \cite{Gupta:2010}, \cite{Plan:2014}. Whilst they obtained bounds of the order ${\cal O}(n \log \frac{n}{s}$), no current theory exists for recovering positive bounded signals from bernoulli hyperplanes. This research direction may provide the first clues to solve the ``active learning'' problem: if we are allowed to adaptively \emph{choose} the source nodes at the beginning of each cascade, can we improve on current results?
+This model therefore falls into the 1-bit compressed sensing framework
+\cite{Boufounos:2008}. Several recent papers study the theoretical
+guarantees obtained for 1-bit compressed sensing with specific measurements
+\cite{Gupta:2010, Plan:2014}. Whilst they obtained bounds of the order
+${\cal O}(n \log \frac{n}{s})$, no current theory exists for recovering
+positive bounded signals from Bernoulli hyperplanes. This research direction
+may provide the first clues to solve the ``adaptive learning'' problem: if we
+are allowed to adaptively \emph{choose} the source nodes at the beginning of
+each cascade, how much can we improve the current results?
 \begin{comment}
 The Linear Threshold model can \emph{also} be cast a generalized linear cascade model. However, as we show below, its link function is non-differentiable and necessitates a different analysis. In the Linear Threshold Model, each node $j\in V$ has a threshold $t_j$ from the interval $[0,1]$ and for each node, the sum of incoming weights is less than $1$: $\forall j\in V$, $\sum_{i=1}^m \Theta_{i,j} \leq 1$.
@@ -29,4 +45,4 @@ where we defined again $\theta_j\defeq (\Theta_{1,j},\ldots,\Theta_{m,j})$. In o
 \end{equation}
 The link function of the linear threshold model is the sign function: $z \mapsto \mathbbm{1}_{z > 0}$. This model therefore falls into the 1-bit compressed sensing model \cite{Boufounos:2008} framework. Several recent papers study the theoretical guarantees obtained for 1-bit compressed sensing with specific measurements \cite{Gupta:2010}, \cite{Plan:2014}. Whilst they obtained bounds of the order ${\cal O}(n \log \frac{n}{s}$), no current theory exists for recovering positive bounded signals from bernoulli hyperplanes. We leave this research direction to future work.
-\end{comment}
\ No newline at end of file
+\end{comment}
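
For concreteness, here is a minimal worked sketch of how the updated paragraph's claim casts \eqref{eq:lt} as a 1-bit compressed sensing problem. The stacked notation ($y$, $A$, $T$) is introduced here for illustration only and does not appear in the paper; the per-step equation is the one in the diff above.

% Sketch only: y, A and T are illustrative notation, not defined in the paper.
% Fix a node j and stack the T observed steps of a cascade:
\begin{equation*}
  y_t \defeq X^{t+1}_j = \text{sign}\left(\inprod{\theta_j}{X^t} - t_j\right),
  \quad t = 1,\ldots,T,
  \qquad \text{i.e.} \qquad
  y = \text{sign}\left(A\,\theta_j - t_j \mathbf{1}\right),
\end{equation*}
% where the rows of A are the observed activation vectors X^1, ..., X^T.
% Each cascade step is therefore one 1-bit measurement of theta_j, except that
% the measurement rows are {0,1}-valued (Bernoulli) rather than the Gaussian
% rows assumed by most existing 1-bit compressed sensing guarantees, which is
% why the paragraph above notes that no current theory covers this setting.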
