Diffstat (limited to 'paper/sections')
-rw-r--r--  paper/sections/results.tex  2
1 file changed, 1 insertion, 1 deletion
diff --git a/paper/sections/results.tex b/paper/sections/results.tex
index 8787241..4e129fc 100644
--- a/paper/sections/results.tex
+++ b/paper/sections/results.tex
@@ -1,7 +1,7 @@
In this section, we exploit standard techniques from sparse recovery and leverage the simple structure of generalized linear models to address the standard problem of edge detection as well as the less frequently studied problem of coefficient estimation. We discuss standard diffusion processes and extend our analysis beyond sparse graphs to approximately sparse graphs.
\paragraph{Recovering Edges vs. Recovering Coefficients}
-Recovering the edges of the graph, or equivalently estimating the parents of a node, amounts to recovering the support (the non-zero coefficients) of $\Theta$, a task known as {\it variable selection}. However, a series of papers has argued that the Lasso is an inappropriate variable selection method (see H.~Zou and T.~Hastie, S.~van de Geer, ...). In fact, the irrepresentability condition (discussed in and was introduced in ) is essentially necessary for variable selection and rarely holds in practical situations where variables are correlated. We defer an extended analysis of this situation to Section~\ref{sec:assumptions}.
+Recovering the edges of the graph, or equivalently estimating the parents of a node, amounts to recovering the support (the non-zero coefficients) of $\Theta$, a task known as {\it variable selection}. However, a series of papers has argued that the Lasso is an inappropriate variable selection method (see H.~Zou and T.~Hastie, S.~van de Geer, ...). In fact, the irrepresentability condition (discussed in \cite{Daneshmand:2014} and introduced in \cite{Zhao:2006}) is essentially necessary for variable selection and rarely holds in practical situations where variables are correlated. We defer an extended analysis of this situation to Section~\ref{sec:assumptions}.
Our approach is different: rather than performing variable selection by estimating the set $\{j : \theta_j \neq 0\}$, we seek to upper-bound the estimation error $\|\hat \theta - \theta^* \|_2$. We first apply standard techniques to obtain a bound of order ${\cal O}\big(\sqrt{\frac{s \log m}{n}}\big)$ in the case of sparse vectors, which is tight to a certain extent, as we will show in Section ???.
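For reference, a minimal sketch of the condition discussed above, stated with notation ($\Sigma$, $S$, $\gamma$) that does not appear in this excerpt: in its population form, the irrepresentability condition of \cite{Zhao:2006}, for a design with covariance $\Sigma$ and true support $S = \{j : \theta^*_j \neq 0\}$, requires that for some $\gamma > 0$,
\[
\big\| \Sigma_{S^c S} \, \Sigma_{S S}^{-1} \, \mathrm{sign}(\theta^*_S) \big\|_\infty \;\le\; 1 - \gamma ,
\]
which fails as soon as irrelevant variables are too strongly correlated with those in the support.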
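Similarly, a hedged sketch of the kind of bound invoked in the last sentence, in its standard form for the Lasso under a restricted eigenvalue condition (the exact conditions and constants in the generalized linear model setting studied here may differ): choosing $\lambda \asymp \sigma \sqrt{\frac{\log m}{n}}$ yields, with high probability,
\[
\| \hat\theta - \theta^* \|_2 \;\le\; C(\kappa)\, \sigma \sqrt{\frac{s \log m}{n}} ,
\]
where $s$ is the sparsity of $\theta^*$, $\sigma$ the noise level, and $C(\kappa)$ a constant depending only on the restricted eigenvalue constant $\kappa$.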