Diffstat (limited to 'paper/sections/intro.tex')

 paper/sections/intro.tex | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/paper/sections/intro.tex b/paper/sections/intro.tex
index 5f507d9..8edf341 100644
--- a/paper/sections/intro.tex
+++ b/paper/sections/intro.tex
@@ -14,7 +14,7 @@ The study of edge prediction in graph has been an active field of research for o
 The work of \cite{Abrahao:13} studies the same continuous-model framework as \cite{GomezRodriguez:2010} and obtains a ${\cal O}(\Delta^9 \log^2 \Delta \log m)$ support recovery algorithm. Their work also studies the information leveraged by different `parts' of the cascade, showing that a surprisingly large amount of information can be gleaned from the first `infections' of the cascade. We will reach a similar conclusion in section~\ref{sec:assumptions}. However, they assume a single-source model, where only one source is selected at random, which is less realistic in practice. Oftentimes, the `patient 0' is unknown to us, and a multi-source model intuitively captures the more common situation of a delayed observation of the cascade.
-The recent work of \cite{Daneshmand:2014} is very similar to our own; in fact, we only became aware of their contribution at a late stage in the writing of this paper. Similarly to our approach, they consider an $\ell_1$-norm penalty on their objective function, adapting standard results from sparse recovery to obtain a ${\cal O}(\Delta^3 \log m)$ algorithm under an irrepresentability condition. With stronger assumptions, they match the \cite{Netrapalli:2012} bound of ${\cal O}(\Delta^2 \log m)$ by exploiting a similar proof technique based on the KKT conditions of the objective function. Their work has the merit of studying a general framework of continuous functions. However, they also place themselves in the restrictive single-source context.
+The recent work of \cite{Daneshmand:2014} is very similar to our own: they consider an $\ell_1$-norm penalty on their objective function, adapting standard results from sparse recovery to obtain a ${\cal O}(\Delta^3 \log m)$ algorithm under an irrepresentability condition. With stronger assumptions, they match the \cite{Netrapalli:2012} bound of ${\cal O}(\Delta^2 \log m)$ by exploiting a similar proof technique based on the KKT conditions of the objective function. Their work has the merit of studying a general framework of continuous functions. Like \cite{Abrahao:13}, they place themselves in the restrictive single-source context.
 
 \paragraph{Our contributions}
-Though our work closely follows the spirit of \cite{Netrapalli:2012} and \cite{Daneshmand:2014}, we believe we provide several significant improvements over their work. We study sparse recovery under less restrictive assumptions and obtain the first, to the best of the authors' knowledge, ${\cal O}(\Delta \log m)$ algorithm for graph inference from cascades, which is tight to a certain extent. We also study the seemingly overlooked problem of weight recovery, as well as the relaxed sparsity setting. We study the case of the two best-known diffusion processes for simplicity, as outlined in \cite{Kempe:03}, but many arguments can be extended to more general diffusion processes.
+Though our work closely follows the spirit of \cite{Netrapalli:2012} and \cite{Daneshmand:2014}, we believe we provide several significant improvements over their work. We study sparse recovery under less restrictive assumptions and obtain the first ${\cal O}(\Delta \log m)$ algorithm for graph inference from cascades. We also study the seemingly overlooked problem of weight recovery, as well as the relaxed sparsity setting.
 Finally, we show these results are almost tight by proving in section~\ref{sec:lowerbound} the first lower bound on the number of observations required to recover the edges and the edge weights of a graph in the general case. We study the case of the two best-known diffusion processes for simplicity, as outlined in \cite{Kempe:03}, but many arguments can be extended to more general diffusion processes.
