Diffstat (limited to 'paper/sections/intro.tex')
-rw-r--r--   paper/sections/intro.tex   10
1 files changed, 5 insertions, 5 deletions
diff --git a/paper/sections/intro.tex b/paper/sections/intro.tex
index 0f42855..9da8b3d 100644
--- a/paper/sections/intro.tex
+++ b/paper/sections/intro.tex
@@ -58,15 +58,15 @@ The organization of the paper is as follows: ...
 \paragraph{Related Work}
-The study of edge prediction in graph has been an active field of research for over a decade. \cite{GomezRodriguez:2010} was one of the first papers to study graph prediction from cascades. They introduce the {\scshape netinf} algorithm, which approximates the likelihood of cascades represented as a continuous process. The algorithm was later improved/modified in later work. Beside validation on generic networks, {\scshape netinf} is not known to have any theoretical recovery guarantees. \cite{Netrapalli:2012} studied solely the independent cascade model and obtained the first ${\cal O}(\Delta^2 \log m)$ guarantee on general networks. The algorithm is based around the same likelihood function we suggest, without the $\ell1$-norm penalty. However, the analysis depended strongly on a restrictive {\it correlation decay} assumption, which strongly restricts the number of new infections at every step. In this restricted setting, they show a complex lower bound, which is roughly $\Omega(\Delta \log (m/\Delta))$ lower bound for perfect support recovery with constant probability.
+The study of edge prediction in graph has been an active field of research for over a decade. \cite{GomezRodriguez:2010} was one of the first papers to study graph prediction from cascades. They introduce the {\scshape netinf} algorithm, which approximates the likelihood of cascades represented as a continuous process. The algorithm was later improved/modified in later work. Beside validation on generic networks, {\scshape netinf} is not known to have any theoretical recovery guarantees. \cite{Netrapalli:2012} studied solely the independent cascade model and obtained the first ${\cal O}(s^2 \log m)$ guarantee on general networks. The algorithm is based around the same likelihood function we suggest, without the $\ell1$-norm penalty. However, the analysis depended strongly on a restrictive {\it correlation decay} assumption, which strongly restricts the number of new infections at every step. In this restricted setting, they show a complex lower bound of the number of cascades needed for perfect support recovery with constant probability of the order $\Omega(s \log (m/s))$.
-The work of \cite{Abrahao:13} study the same continuous-model framework as \cite{GomezRodriguez:2010} and obtain a ${\cal O}(\Delta^9 \log^2 \Delta \log m)$ support recovery algorithm. Their work also studies the information leveraged by different `parts' of the cascade, showing that a surprisingly important amount of information can be gleaned from the first `infections' of the cascade. We will reach a similar conclusion in section~\ref{sec:assumptions}. However, their model supposes a single-source model, where only one source is selected at random, which is less realistic in practice. Oftentimes, the `patient 0' is unknown to us, and a multi-source model intuitively models the more common situation of a delayed observation of the cascade.
+The work of \cite{Abrahao:13} study the same continuous-model framework as \cite{GomezRodriguez:2010} and obtain a ${\cal O}(s^9 \log^2 s \log m)$ support recovery algorithm. Their work also studies the information leveraged by different `parts' of the cascade, showing that a surprisingly important amount of information can be gleaned from the first `infections' of the cascade. We will reach a similar conclusion in section~\ref{sec:assumptions}. However, their model supposes a single-source model, where only one source is selected at random, which is less realistic in practice. Oftentimes, the `patient 0' is unknown to us, and a multi-source model intuitively models the more common situation of a delayed observation of the cascade.
 Closest to this work is a recent paper by \cite{Daneshmand:2014}, they consider
 a $\ell_1$-norm penalty to their objective function, adapting standard results
-from sparse recovery to obtain a ${\cal O}(\Delta^3 \log m)$ algorithm under an
+from sparse recovery to obtain a ${\cal O}(s^3 \log m)$ algorithm under an
 irrepresentability condition. With stronger assumptions, they match the
-\cite{Netrapalli:2012} bound of ${\cal O}(\Delta^2 \log m)$, by exploiting
+\cite{Netrapalli:2012} bound of ${\cal O}(s^2 \log m)$, by exploiting
 a similar proof technique based around the KKT conditions of the objective
 function. Their work has the merit of studying a general framework of
 continuous functions. Similarly to \cite{Abrahao:13}, they place themselves in
@@ -76,5 +76,5 @@ TODO: add related works on lower bounds.
 \begin{comment}
 \paragraph{Our contributions}
-Though our work follows closely the spirit of \cite{Netrapalli:2012} and \cite{Daneshmand:2014}, we believe we provide several significant improvements to their work. We study sparse recovery under less restrictive assumptions and obtain the first ${\cal O}(\Delta \log m)$ algorithm for graph inference from cascades. We also study the seemingly overlooked problem of weight recovery as well as the setting of the relaxed sparsity setting. Finally, we show these results are almost tight, by proving in section~\ref{sec:lowerbound} the first lower bound on the number of observations required to recover the edges and the edge weights of a graph in the general case. We study the case of the two best known diffusion processes for simplicity as outlined in \cite{Kempe:03}, but many arguments can be extended to more general diffusion processes.
+Though our work follows closely the spirit of \cite{Netrapalli:2012} and \cite{Daneshmand:2014}, we believe we provide several significant improvements to their work. We study sparse recovery under less restrictive assumptions and obtain the first ${\cal O}(s \log m)$ algorithm for graph inference from cascades. We also study the seemingly overlooked problem of weight recovery as well as the setting of the relaxed sparsity setting. Finally, we show these results are almost tight, by proving in section~\ref{sec:lowerbound} the first lower bound on the number of observations required to recover the edges and the edge weights of a graph in the general case. We study the case of the two best known diffusion processes for simplicity as outlined in \cite{Kempe:03}, but many arguments can be extended to more general diffusion processes.
 \end{comment}
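For readers skimming this change: the paragraphs being edited all revolve around the same idea, maximizing an $\ell_1$-penalized likelihood to recover a node's incoming edges (its "support") from cascade observations. The sketch below is a generic, hypothetical stand-in for that approach, using $\ell_1$-penalized logistic regression solved by proximal gradient (ISTA) in place of the actual cascade likelihoods of the cited papers; all names, sizes, and constants are illustrative, not taken from any of the referenced works.

```python
import numpy as np

def l1_logistic_ista(X, y, lam, lr=0.1, n_iter=2000):
    """l1-penalized logistic regression via proximal gradient (ISTA).

    Illustrative stand-in for a penalized cascade likelihood: row t of X
    records which candidate parents of a fixed node i were infected at
    step t, and y[t] records whether i became infected at step t+1. The
    support of the returned weight vector estimates i's in-neighbors.
    """
    n, m = X.shape
    w, b = np.zeros(m), 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted infection prob.
        g = (p - y) / n                          # logistic-loss residual
        w -= lr * (X.T @ g)                      # gradient step on weights
        b -= lr * g.sum()                        # intercept left unpenalized
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft-threshold
    return w, b

# Toy run (illustrative data, not from the paper): among m = 10 candidate
# parents, the true parents of node i are {1, 2}; infections are drawn
# from a logistic model so the fitted model is well specified.
rng = np.random.default_rng(0)
n, m, parents = 2000, 10, np.array([1, 2])
X = rng.integers(0, 2, size=(n, m)).astype(float)
logit = 3.0 * X[:, parents].sum(axis=1) - 2.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)
w, b = l1_logistic_ista(X, y, lam=0.05)
support = set(np.flatnonzero(np.abs(w) > 0.1))
```

With enough observations the soft-thresholding drives the irrelevant coordinates to exactly zero, so the estimated support is read off directly from the nonzero weights, which is the sense in which the cited sample-complexity bounds (number of cascades versus $s$ and $m$) are stated.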
