 finale/mid_report.tex | 112
 1 file changed, 80 insertions(+), 32 deletions(-)
diff --git a/finale/mid_report.tex b/finale/mid_report.tex
index 9873abd..d22a557 100644
--- a/finale/mid_report.tex
+++ b/finale/mid_report.tex
@@ -46,30 +46,72 @@
The Network Inference Problem (NIP) is the machine learning challenge of
recovering the edges and edge weights of an unknown weighted graph from the
observations of a random contagion process propagating over this graph.
- While estimators with provable convergence rate guarantees have been
- obtained under various formulations of the NIP, a rigorous statistical
- treatment of the problem is still lacking. In this work, we build upon the
- unified NIP formulation of [] to explore the connections between the
- topological properties of the graph to be learnt and the resulting quality
- of the estimators. Specifically, we analyze which properties of the graph
- render NIP unfeasible or hard, and which properties can be exploited to
- improve the quality of the estimators.
+ While estimators for the edge weights with provable convergence guarantees
+ have been obtained under various formulations of the NIP, a rigorous
+ statistical treatment of the problem is still lacking. In this work, we
+ build upon the unified NIP formulation of \cite{pouget} to explore the
+ connections between the topological properties of the graph to be learned
+ and the resulting quality of the estimators. Specifically, we analyze which
+ properties of the graph render the NIP infeasible or hard, and which
+ properties can be exploited to improve the quality of the estimators.
\end{abstract}
\section{Introduction}
-Central question: \emph{relating properties of the graph to the difficulty of
-doing graph inference from cascades.}
+A random contagion process propagating over a graph provides valuable
+information about the presence and weights of its edges. This fact is exploited
+in many real-life applications, including learning how people influence each
+other by observing how they communicate or share information in social
+networks, learning protein activation graphs in biology, or learning
+correlations between financial assets.
-Two subquestions:
+Formally, this problem can be formulated as a machine learning task in which
+one specifies a probabilistic contagion model whose parameters are the edge
+weights of the unknown graph. Then, given observations drawn from this random
+process, one seeks to construct an estimator of the edge weights. This problem
+is known in the literature as the Network Inference Problem (NIP).
+
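A hypothetical minimal sketch of such a contagion process, using the discrete-time independent cascade model (one of the formulations unified later in this report); the toy graph, weights, and function names are illustrative and not taken from any cited work:

```python
import random

def simulate_cascade(weights, seeds, rng):
    """One run of a discrete-time independent cascade.

    weights[u][v] is the probability that node u, once infected,
    infects node v at the next time step.  Returns a dict mapping
    each infected node to its infection time.
    """
    infected_at = {u: 0 for u in seeds}
    frontier = list(seeds)
    t = 0
    while frontier:
        t += 1
        newly_infected = []
        for u in frontier:
            for v, p in weights.get(u, {}).items():
                if v not in infected_at and rng.random() < p:
                    infected_at[v] = t
                    newly_infected.append(v)
        frontier = newly_infected
    return infected_at

# Toy 4-node graph: edges 0 -> 1 (0.9), 0 -> 3 (0.5), 1 -> 2 (0.9).
weights = {0: {1: 0.9, 3: 0.5}, 1: {2: 0.9}}
rng = random.Random(0)
cascades = [simulate_cascade(weights, seeds={0}, rng=rng) for _ in range(2000)]
# The empirical infection frequency of node 1 estimates the weight of 0 -> 1.
freq_1 = sum(1 in c for c in cascades) / len(cascades)
```

The inference problem is precisely the inverse direction: recover `weights` from many observed `infected_at` samples.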
+A growing body of work has focused on obtaining convergence guarantees for
+estimators of the edge weights under various formulations of the Network
+Inference Problem. In \cite{pouget}, we noted that many of these formulations
+could be interpreted as specific instances of a common \emph{Generalized
+Linear Cascade} (GLC) model and showed how to apply results from the sparse
+recovery literature to obtain close-to-optimal convergence guarantees.
+
+However, in \cite{pouget} as in previous works, the convergence guarantees rely
+on many assumptions about the non-degeneracy of the probabilistic model or of
+the observations. These assumptions are hard to interpret and limit the
+applicability of the obtained results to real-life settings. Furthermore,
+previous works focused on obtaining generic convergence guarantees, and it is
+not obvious that the developed frameworks can be adapted in the presence of
+domain-specific knowledge about the graph to be learned. The goal of the
+current work is to address these shortcomings by asking the following
+question: \emph{how do properties of the graph to be learned impact the task
+of solving the Network Inference Problem for this graph?} Specifically, we
+wish to understand:
\begin{itemize}
- \item which properties of the graph make it impossible or hard to learn
- (identifiability, convergence rate)?
- \item which properties of the graph can be exploited to learn better or
- faster (use a Bayesian prior)?
+ \item which properties of the graph render the Network Inference Problem
+ infeasible or hard to solve. This relates to the question of
+ identifiability of the contagion model, and to understanding how the
+ constants which appear in convergence-rate bounds for edge-weight
+ estimators relate to topological properties of the graph;
+ \item how domain-specific knowledge about the graph to be learned can be
+ exploited to make the Network Inference Problem easier to solve.
+ A natural framework for this is a Bayesian one, where the
+ domain-specific knowledge is encoded as a prior. To the best of our
+ knowledge, this approach has not been explored beyond MAP estimation
+ under a sparse (Laplace) prior in the context of discrete-time models.
\end{itemize}
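The Bayesian direction in the last item can be made concrete through a standard observation: under a Laplace prior $p(\theta) \propto e^{-\lambda \lVert\theta\rVert_1}$ on the edge weights $\theta$, MAP estimation is exactly $\ell_1$-regularized maximum likelihood, which is how the sparse (Laplace) prior connects to Lasso-type estimators:

```latex
% MAP estimation under a Laplace prior reduces to l1-regularized MLE,
% since \log p(\theta) = -\lambda \lVert\theta\rVert_1 + \mathrm{const}:
\hat{\theta}_{\mathrm{MAP}}
  = \operatorname*{arg\,max}_{\theta}\;
      \bigl[\log p(x \mid \theta) + \log p(\theta)\bigr]
  = \operatorname*{arg\,min}_{\theta}\;
      \bigl[-\log p(x \mid \theta) + \lambda \lVert\theta\rVert_1\bigr].
```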
-\subsection*{Related work}
+This report is organized as follows: we conclude the introduction with
+a review of related works. The Generalized Linear Cascade (GLC) model from
+\cite{pouget} is presented in Section 2. Section 3 explores the question of
+identifiability of the model. Section 4 presents both the frequentist and
+Bayesian approaches to inference in the GLC model, as well as a novel active
+learning formulation of the problem. The code used in our experiments can be
+found in the Appendix.
+
+\paragraph{Related works.}
The study of edge prediction in graphs has been an active field of research for
over a decade~\cite{Nowell08, Leskovec07, AdarA05}. \cite{GomezRodriguez:2010}
introduced the {\scshape Netinf} algorithm, which approximates the likelihood of
@@ -77,21 +119,27 @@ cascades represented as a continuous process. The algorithm, relying on
Maximum-Likelihood and convex relaxations, was improved in later
work~\cite{gomezbalduzzi:2011}, but is not known to have any theoretical
guarantees besides empirical validation on synthetic networks.
-\cite{Netrapalli:2012} studied the discrete-time version of the independent
-cascade model and obtained the first ${\cal O}(s^2 \log m)$ recovery guarantee
-on general networks, by using unregularized MLE and making a {\it correlation
-decay\/} assumption, which limits the number of new infections at every step.
-They also suggest a {\scshape Greedy} algorithm, with provably good performance
-in tree graphs, which maintains a counter of each node's antecedents, i.e. the
-set of nodes infected at a prior time step to its infection time. The work
-of~\cite{Abrahao:13} studies the same continuous-model framework as
-\cite{GomezRodriguez:2010} and obtains an ${\cal O}(s^9 \log^2 s \log m)$
-support recovery algorithm, without the \emph{correlation decay} assumption.
-\cite{du2013uncover,Daneshmand:2014} propose Lasso regularization of MLE for
-recovering the weights of the graph under a continuous-time independent cascade
-model. The work of~\cite{du2014influence} is slightly orthogonal to ours since
-they suggest learning the \emph{influence} function, rather than the parameters
-of the network directly.
+
+Let $m$ denote the number of nodes in the graph and, for a given node, let $s$
+denote its in-degree. In what follows, we call a \emph{recovery guarantee} an
+upper bound on the number of observations required (expressed as a function of
+$s$ and $m$) to learn the incoming edges of a node to a fixed, arbitrary
+accuracy level. \cite{Netrapalli:2012} studied the discrete-time version of
+the independent cascade model and obtained the first ${\cal O}(s^2 \log m)$
+recovery guarantee on general networks, by using unregularized MLE and making
+a {\it correlation decay\/} assumption, which limits the number of new
+infections at every step. They also suggest a {\scshape Greedy} algorithm,
+with provably good performance on tree graphs, which maintains a counter of
+each node's antecedents, i.e., the set of nodes infected at a time step prior
+to its own infection time. The work of~\cite{Abrahao:13} studies the same
+continuous-model framework as \cite{GomezRodriguez:2010} and obtains an
+${\cal O}(s^9 \log^2 s \log m)$ support recovery guarantee, without the
+\emph{correlation decay} assumption. \cite{du2013uncover,Daneshmand:2014}
+propose Lasso regularization of the MLE for recovering the weights of the
+graph under a continuous-time independent cascade model. The work
+of~\cite{du2014influence} is slightly orthogonal to ours, since they suggest
+learning the \emph{influence} function rather than the parameters of the
+network directly.
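As a self-contained illustration of the Lasso-type approach (a sketch under a simplified linear observation model, not the exact likelihood of the cited works), the following recovers a node's $s$-sparse incoming weight vector by $\ell_1$-regularized least squares, solved with proximal gradient descent (ISTA):

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=500):
    # Minimize (1/2n) ||y - X w||^2 + lam ||w||_1 by proximal gradient.
    n, _ = X.shape
    step = n / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n
        w = soft_threshold(w - step * grad, step * lam)
    return w

rng = np.random.default_rng(0)
m, s = 200, 5                      # number of nodes, in-degree (sparsity)
n = 120                            # a few multiples of s * log(m) observations
w_true = np.zeros(m)
support = rng.choice(m, size=s, replace=False)
w_true[support] = rng.uniform(0.5, 1.0, size=s)
X = rng.standard_normal((n, m))    # stand-in for cascade measurements
y = X @ w_true + 0.01 * rng.standard_normal(n)
w_hat = lasso_ista(X, y, lam=0.05)
recovered = np.flatnonzero(np.abs(w_hat) > 0.1)
```

With on the order of $s \log m$ well-conditioned observations, `recovered` should coincide with the true support, which is the qualitative content of the recovery guarantees discussed above.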
More recently, the continuous process studied in previous work has been
reformulated as a Hawkes process, with recent papers
@@ -385,6 +433,6 @@ given $(x_t)_{t\geq1}$}$ \Comment{Update posterior on $\theta$}
\end{algorithm}
\bibliography{sparse}{}
-\bibliographystyle{alpha}
+\bibliographystyle{abbrv}
\end{document}