 finale/sections/related.tex | 55
 finale/sparse.bib           |  8
 2 files changed, 26 insertions(+), 37 deletions(-)
diff --git a/finale/sections/related.tex b/finale/sections/related.tex
index 59fb759..5f0289a 100644
--- a/finale/sections/related.tex
+++ b/finale/sections/related.tex
@@ -1,41 +1,22 @@
\paragraph{Related works.}
The study of edge prediction in graphs has been an active field of research for
-over a decade~\cite{Nowell08, Leskovec07, AdarA05}.~\cite{GomezRodriguez:2010}
-introduced the {\scshape Netinf} algorithm, which approximates the likelihood of
-cascades represented as a continuous process. The algorithm, relying on
-Maximum-Likelihood and convex relaxations was improved in later
-work~\cite{gomezbalduzzi:2011}, but is not known to have any theoretical
-guarantees beside empirical validation on synthetic networks.
+over a decade~\cite{Nowell08, Leskovec07, AdarA05}. Maximum-likelihood
+estimation (regularized and unregularized) for graph inference has been
+studied both for discrete-time models~\cite{Netrapalli:2012, pouget} and
+continuous-time models~\cite{GomezRodriguez:2010, gomezbalduzzi:2011,
+Abrahao:13}.
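+Schematically, these estimators recover the incoming edge weights $\theta$ of
+a node from observed cascades $c_1, \ldots, c_n$ by solving (in our notation,
+not that of the cited works)
+\[
+\hat{\theta} \in \operatorname*{arg\,min}_{\theta} \;
+-\sum_{i=1}^{n} \log p_{\theta}(c_i) + \lambda \lVert \theta \rVert_1,
+\]
+with $\lambda = 0$ in the unregularized case and $\lambda > 0$ for the
+Lasso-regularized variants.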
-Let us denote by $m$ the number of nodes in the graph and for a given node, we
-denote by $s$ its in-degree. In what follows, we call a \emph{recovery
-guarantee} an upper bound on the number of observations required (expressed as
-a function of $s$ and $m$) to learn the incoming edges of a node to a fixed
-arbitrary accuracy level. \cite{Netrapalli:2012} studied the discrete-time
-version of the independent cascade model and obtained the first ${\cal O}(s^2
-\log m)$ recovery guarantee on general networks, by using unregularized MLE and
-making a {\it correlation decay\/} assumption, which limits the number of new
-infections at every step. They also suggest a {\scshape Greedy} algorithm,
-with provably good performance in tree graphs, which maintains a counter of
-each node's antecedents, i.e. the set of nodes infected at a prior time step to
-its infection time. The work of~\cite{Abrahao:13} studies the same
-continuous-model framework as \cite{GomezRodriguez:2010} and obtains an ${\cal
-O}(s^9 \log^2 s \log m)$ support recovery algorithm, without the
-\emph{correlation decay} assumption. \cite{du2013uncover,Daneshmand:2014}
-propose Lasso regularization of MLE for recovering the weights of the graph
-under a continuous-time independent cascade model. The work
-of~\cite{du2014influence} is slightly orthogonal to ours since they suggest
-learning the \emph{influence} function, rather than the parameters of the
-network directly.
-
-More recently, the continuous process studied in previous work has been
-reformulated as a Hawkes process, with recent papers
+More recently, the continuous-time processes studied in previous work have been
+reformulated as Hawkes processes, with recent papers
\cite{linderman2014discovering, linderman2015scalable, simma2012modeling},
-focusing on Expectation-Maximization, and Variational Inference methods to learn
-these processes. Because of the discrete nature of the GLC model, we hope to
-bring a better understanding to the link between the properties of a graph and
-its \emph{learnability}. We wish to further explore the idea of non-product
-priors raised in \cite{linderman2014discovering, linderman2015scalable}, since
-the experimental validation of their work focused on simple graph priors.
-Finally, the \emph{Active Learning} formulation is, to the best of the
-authors' knowledge, novel in this context.
+focusing on Expectation-Maximization, Gibbs sampling, and Variational
+Inference methods. In comparison, the discrete-time nature of the GLC model
+allows us to scale inference to larger graphs. Furthermore, while the works on
+Hawkes processes used exclusively factorized graph priors, we hope that
+Bayesian inference for the GLC model can more easily accommodate expressive
+graph priors; this is a direction we wish to explore in future work.
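+
+For reference, a multivariate Hawkes process on $m$ nodes posits, for each
+node $i$, a conditional intensity of the standard form (notation ours)
+\[
+\lambda_i(t) = \mu_i + \sum_{j=1}^{m} \sum_{t_{j,k} < t}
+w_{ji}\, \kappa(t - t_{j,k}),
+\]
+where $\mu_i$ is a base rate, $w_{ji}$ encodes the influence of node $j$ on
+node $i$ (the edge weights being inferred), $t_{j,k}$ ranges over the events
+of node $j$, and $\kappa$ is a decay kernel.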
+
+The Active Learning formulation is, to the best of the authors' knowledge,
novel in this context. The Active Learning approach of~\cite{shababo} shares
+some similarities with ours, even though their model is not, strictly
+speaking, a cascade model (in particular, the time steps are independent).
diff --git a/finale/sparse.bib b/finale/sparse.bib
index 34e2a84..11e7c27 100644
--- a/finale/sparse.bib
+++ b/finale/sparse.bib
@@ -1,3 +1,11 @@
+@inproceedings{shababo,
+ title={Bayesian inference and online experimental design for mapping neural microcircuits},
+ author={Shababo, Ben and Paige, Brooks and Pakman, Ari and Paninski, Liam},
+ booktitle={Advances in Neural Information Processing Systems},
+ pages={1304--1312},
+ year={2013}
+}
+
@article{pymc,
title={PyMC: Bayesian stochastic modelling in Python},
author={Patil, Anand and Huard, David and Fonnesbeck, Christopher J},