author    Thibaut Horel <thibaut.horel@gmail.com>  2015-12-11 20:07:55 -0500
committer Thibaut Horel <thibaut.horel@gmail.com>  2015-12-11 20:07:55 -0500
commit    fb13f68c3e0901b0e876c602fee556566b7ed6ce (patch)
tree      04792dae37f43e4d073a9e81c31065c5088b41f0 /finale/sections/intro.tex
parent    4206217325019092305f1b17a6e06e331f4d99c4 (diff)

Introduction

Diffstat (limited to 'finale/sections/intro.tex')
 finale/sections/intro.tex | 34 ++++++++++++++++++++++++++--------
 1 file changed, 26 insertions(+), 8 deletions(-)
diff --git a/finale/sections/intro.tex b/finale/sections/intro.tex
index f1f1859..0806c98 100644
--- a/finale/sections/intro.tex
+++ b/finale/sections/intro.tex
@@ -7,15 +7,33 @@ only observing the ``infection times'' of the nodes in the graph, one might
hope to recover the underlying graph and the parameters of the cascade model.
This problem is known in the literature as the \emph{Network Inference problem}.
+More precisely, the cascade models studied here are discrete-time random
+processes, and the observations are snapshots of the states of the nodes in
+the network at each time step. The cascade model specifies the transition
+probabilities between states as a function of the edge weights of the
+network. Recovering the edge weights of the network from observations of
+cascades then amounts to parametric inference in the cascade model.
+
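As an illustration, one step of such a discrete-time cascade can be sketched as follows. This is a minimal sketch assuming an independent-cascade-style transition rule in which each edge weight is directly an activation probability; the transition function of the cascade models studied in the paper may differ, and all names here are hypothetical:

```python
import random

# Hypothetical sketch: one step of a discrete-time cascade on a weighted
# directed graph. Each node activated at the previous step (newly_active)
# tries to activate each still-inactive out-neighbor v, succeeding with
# probability equal to the edge weight w[(u, v)].

def cascade_step(weights, active, newly_active, rng=random.random):
    """Return the set of nodes activated at this time step.

    weights: {(u, v): w} maps a directed edge to its weight in [0, 1].
    active: set of all nodes activated so far.
    newly_active: subset of `active` activated at the previous step.
    """
    next_active = set()
    for (u, v), w in weights.items():
        if u in newly_active and v not in active:
            if rng() < w:
                next_active.add(v)
    return next_active
```

Iterating this step until no new node is activated, and recording the set of active nodes at each step, produces exactly the kind of snapshot observations described above.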
+A recent line of work \cite{GomezRodriguez:2010,Netrapalli:2012,pouget} has
+focused on maximum likelihood estimation (MLE) of the edge weights and on
+obtaining guarantees on the convergence rate of the MLE estimator. In this
+work, we depart from this line of work by studying the Network Inference
+problem in the Bayesian setting. Specifically:
\begin{itemize}
- \item graph inference: what is the proble? what is an observation,
- contagion model
- \item prior work: sample complexity with MLE
- \item here: bayesian approach
- \begin{itemize}
- \item natural framework for active learning wwith significant
- speedup over passive
- \end{itemize}
+ \item we propose a Bayesian Inference formulation of the Network Inference
+ problem in the Generalized Linear Cascade (GLC) model of \cite{pouget} and
+ show how to apply MCMC and variational inference to it;
+ \item we show how to leverage this Bayesian formulation to design active
+ learning heuristics in which the experimenter dynamically chooses the
+ source node at which the observed cascades originate;
+ \item we show empirically that active learning greatly improves the speed
+ of learning compared to i.i.d.\ observations.
\end{itemize}
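One simple instance of such an active learning heuristic is uncertainty sampling: seed the next cascade at the node whose outgoing edge weights are least certain under the current posterior. This is an illustrative assumption on our part, not necessarily the heuristic developed in the paper, and the function names are hypothetical:

```python
from statistics import pvariance

# Hypothetical active-learning heuristic: given posterior samples of the
# outgoing edge weights of each candidate source node, pick the node whose
# weights have the largest total posterior variance, i.e. the node about
# which the current posterior is least certain.

def pick_source(posterior_samples):
    """posterior_samples: {node: [samples_for_edge_1, samples_for_edge_2, ...]}
    where each samples_for_edge_i is a list of posterior draws of one
    outgoing edge weight of that node."""
    def total_variance(node):
        return sum(pvariance(samples) for samples in posterior_samples[node])
    return max(posterior_samples, key=total_variance)
```

With posterior samples obtained from MCMC or variational inference, such a rule replaces the i.i.d.\ choice of cascade sources with an adaptive one.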
+The organization of the paper is as follows: we conclude this introduction
+with a review of related work. Section 2 introduces the notation and the
+Generalized Linear Cascade model, and Section 3 presents our Bayesian
+Inference formulation. The active learning approach is described in
+Section 4. Section 5 gives our experimental results. Finally, we conclude
+with a discussion in Section 6.
\input{sections/related.tex}