\subsection{The Irrepresentability Condition}
\cite{Daneshmand:2014} rely on an `incoherence' condition on the hessian of the likelihood function. It is easy to see that their condition is equivalent to what is more commonly called the {\it (S, s)-irrepresentability} condition:
\begin{definition}
Following similar notation, let $Q^* = \nabla^2 f(\theta^*)$, and for index sets $A, B$ let $Q^*_{A,B}$ denote the submatrix of $Q^*$ with rows indexed by $A$ and columns indexed by $B$. The {\it (S, s)-irrepresentability} condition holds with constant $\epsilon \in (0, 1]$ if:
\begin{equation}
\left\| Q^*_{S^C,S} \left( Q^*_{S,S} \right)^{-1} \right\|_{\infty} \leq 1 - \epsilon.
\end{equation}
\end{definition}
If our objective is to recover the support of the graph exactly, the irrepresentability condition has been shown to be essentially necessary \cite{Zhao:2006}. However, several recent papers \cite{vandegeer:2011, Zou:2006} argue that this condition is unrealistic whenever the variables are correlated. Consider for example a design in which $Q^*_{S,S} = I_s$ and every entry of $Q^*_{S^C,S}$ equals $\rho > 0$: the condition then requires $s\rho \leq 1 - \epsilon$, and therefore fails as soon as the correlation $\rho$ reaches $1/s$.

Yet, if we only wish to recover all edges above a certain threshold, bounding the $\ell_2$-error allows us to recover all edges with weights above a certain minimum threshold under the intuitively weaker {\bf (RE)} condition. In practical scenarios, such as social networks, where one seeks to recover only the significant edges, this is a reasonable assumption.

As mentioned previously, it is intuitive that the irrepresentability condition is stronger than our suggested {\bf (RE)} assumption. In fact, by slightly adapting a result from \cite{vandegeer:2009}, we can show that a `strong' irrepresentability condition directly {\it implies} the {\bf (RE)} condition for $\ell_2$-recovery:
\begin{proposition}
\label{prop:irrepresentability}
If the irrepresentability condition holds with $\epsilon > \frac{2}{3}$, then the restricted eigenvalue condition holds with constant $\gamma_n \geq \frac{\left(1 - 3(1-\epsilon)\right)^2 \lambda_{\min}^2}{4s} n$, where $\lambda_{\min} > 0$ is the smallest eigenvalue of $Q^*_{S,S}$, on which the results of \cite{Daneshmand:2014} also depend.
\end{proposition}

\subsection{The Restricted Eigenvalue Condition}
Expressing the restricted eigenvalue assumption for correlated measurements in terms of the parameters of the graph and of the cascade diffusion process is non-trivial. The restricted eigenvalue property is, however, well behaved in the following sense:
\begin{lemma}
\label{lem:expected_hessian}
Let $A$ and $B$ be symmetric matrices with $\max_{j,k} |A_{jk} - B_{jk}| \leq \delta$. If the restricted eigenvalue condition holds for $B$ with constant $\gamma$, then it holds for $A$ with constant $\gamma - 16 s \delta$.
\end{lemma}
This result is easily proved by adapting slightly a result from \cite{vandegeer:2009}. Similarly to the analysis conducted in \cite{Daneshmand:2014}, if the restricted eigenvalue condition can be shown to hold for the `expected' hessian $\bar{Q} = \mathbb{E}[Q^*]$, it can be shown to hold for the hessian itself:
\begin{proposition}
\label{prop:expected_hessian}
If the restricted eigenvalue condition holds for the expected hessian $\bar{Q}$ with constant $\gamma$, then with high probability it also holds for the hessian $Q^*$, with constant $\gamma - 16 s \max_{j,k} |Q^*_{jk} - \bar{Q}_{jk}|$.
\end{proposition}
It is most likely possible to remove this extra $s$ factor via sharper sub-Gaussian concentration arguments (see the sub-gaussian paper by \dots), but the calculations are more involved.
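To make the origin of this extra $s$ factor explicit, the following sketch (assuming, as is standard in Lasso-type analyses, that the restricted eigenvalue condition is taken over the cone $\{v : \|v_{S^C}\|_1 \leq 3 \|v_S\|_1\}$, and writing $\bar{Q} = \mathbb{E}[Q^*]$ for the expected hessian and $\gamma$ for its restricted eigenvalue constant) shows how the elementwise deviation of the hessian propagates. For any $v$ in the cone, $\|v\|_1 \leq 4 \|v_S\|_1 \leq 4\sqrt{s}\, \|v\|_2$, so that, with $\delta = \max_{j,k} |Q^*_{jk} - \bar{Q}_{jk}|$,
\begin{equation*}
v^\top Q^* v \;\geq\; v^\top \bar{Q} v - \delta \|v\|_1^2 \;\geq\; \left( \gamma - 16 s \delta \right) \|v\|_2^2.
\end{equation*}
The factor $s$ thus enters only through the crude bound $\|v\|_1^2 \leq 16 s \|v\|_2^2$ on the cone, which is precisely the step a finer sub-Gaussian argument would sharpen.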