author    jeanpouget-abadie <jean.pougetabadie@gmail.com>  2015-12-11 10:16:13 -0500
committer jeanpouget-abadie <jean.pougetabadie@gmail.com>  2015-12-11 10:16:13 -0500
commit    ac2e046300289912e4eae9624aeb59f9e82980de (patch)
tree      6d16fd83cd99ac4f3338d26af690cb5db73eb3d8 /finale
parent    7140bf8f92ff2fa997cb563a6515a53238971e6a (diff)
download  cascades-ac2e046300289912e4eae9624aeb59f9e82980de.tar.gz
changing graphical model + some wording
Diffstat (limited to 'finale')
-rw-r--r--  finale/graphical.pdf          bin 168270 -> 168291 bytes
-rw-r--r--  finale/sections/bayesian.tex  25
2 files changed, 13 insertions, 12 deletions
diff --git a/finale/graphical.pdf b/finale/graphical.pdf
index fcdbf8b..22fb95a 100644
--- a/finale/graphical.pdf
+++ b/finale/graphical.pdf
Binary files differ
diff --git a/finale/sections/bayesian.tex b/finale/sections/bayesian.tex
index 1e4caf7..60272fe 100644
--- a/finale/sections/bayesian.tex
+++ b/finale/sections/bayesian.tex
@@ -3,9 +3,9 @@
\label{fig:graphical}
\includegraphics[scale=.8]{graphical.pdf}
\caption{Graphical model representation of the Network Inference Problem with
- edge weights $\theta_{ij}$, cascade indicator vectors $X^c_t$, edge prior
-parameters $\mu$ and $\sigma$. The source distribution, parameterized by $\phi$,
-is considered fixed here.}
+ edge weights $\theta_{ij}$, observed cascade indicator vectors $X^c_t$, edge
+prior parameters $\mu_{ij}$ and $\sigma_{ij}$. The source distribution,
+parameterized by $\phi$, is considered fixed here.}
\end{figure}
In this section, we develop a Bayesian approach to the Network Inference Problem
@@ -33,9 +33,9 @@ density of triangles has the potential to greatly increase the information we
leverage from each cascade. Of course, such priors no longer allow us to
perform inference in parallel, which was leveraged in prior work.
-A systematic study of non-product priors is left for future work. We focus on
-product priors in the case of the IC model presented in Section~\ref{sec:model},
-which has no conjugate priors:
+\paragraph{The IC model.}
+As mentioned above, the IC model (cf. Section~\ref{sec:model}) has no conjugate
+priors. We consider a truncated product Gaussian prior:
\begin{equation}
\label{eq:gaussianprior}
\text{prior}(\Theta) = \prod_{ij} \mathcal{N}^+(\theta_{ij} | \mu_{ij}, \sigma^2_{ij})
@@ -43,12 +43,11 @@ which has no conjugate priors:
\end{equation}
where $\mathcal{N}^+(\cdot)$ is a Gaussian truncated to lie on $\mathbb{R}^+$,
since $\Theta$ is a transformed parameter $z \mapsto -\log(1 - z)$. This model
-is represented in the graphical model of Figure~\ref{fig:graphical}
+is represented in the graphical model of Figure~\ref{fig:graphical}.
-Since the IC model likelihood has no conjugate family, the prior in
-Eq.~\ref{eq:gaussianprior} is also non-conjuate. We will resort to sampling
-algorithms (MCMC) and approximate Bayesian methods (variational inference),
-which we cover here.
+Since the prior in Eq.~\ref{eq:gaussianprior} is non-conjugate, we will
+resort to sampling algorithms (MCMC) and approximate Bayesian methods
+(variational inference), which we cover here.
\paragraph{MCMC}
The Metropolis-Hastings algorithm, a Markov chain Monte Carlo (MCMC) method, allows us to draw samples from the
@@ -61,4 +60,6 @@ distribution using a variational inference algorithm.
\paragraph{Variational Inference}
-\paragraph{Bohning bounds}
+Variational inference algorithms fit an approximate family of
+distributions to the exact posterior.
+
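To make the sampling approach above concrete, here is a minimal sketch of random-walk Metropolis-Hastings over a single edge weight $\theta_{ij}$ under a truncated-Gaussian prior, in the spirit of Eq. (gaussianprior). The cascade likelihood is a simplified stand-in (independent infection attempts succeeding with probability $1 - e^{-\theta}$), not the paper's full IC-model likelihood, and all function names and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_prior(theta, mu=0.5, sigma=0.3):
    # Truncated Gaussian N^+(theta | mu, sigma^2): zero density below 0.
    # (Unnormalized; the truncation constant cancels in the MH ratio.)
    if theta < 0:
        return -np.inf
    return -0.5 * ((theta - mu) / sigma) ** 2

def log_likelihood(theta, successes, trials):
    # Stand-in likelihood, NOT the paper's IC likelihood: each contact
    # infects independently with probability p = 1 - exp(-theta).
    p = 1.0 - np.exp(-theta)
    if p <= 0.0 or p >= 1.0:
        return -np.inf
    return successes * np.log(p) + (trials - successes) * np.log(1.0 - p)

def metropolis_hastings(successes, trials, n_samples=5000, step=0.1):
    # Random-walk MH: propose theta' = theta + Gaussian noise, accept
    # with probability min(1, posterior(theta') / posterior(theta)).
    theta = 0.5
    log_post = log_prior(theta) + log_likelihood(theta, successes, trials)
    samples = []
    for _ in range(n_samples):
        proposal = theta + step * rng.standard_normal()
        log_post_prop = (log_prior(proposal)
                         + log_likelihood(proposal, successes, trials))
        if np.log(rng.random()) < log_post_prop - log_post:
            theta, log_post = proposal, log_post_prop
        samples.append(theta)
    return np.array(samples)

# Toy data: 30 successful infections out of 100 contact events.
samples = metropolis_hastings(successes=30, trials=100)
# Discard the first half as burn-in before summarizing.
posterior_mean = samples[len(samples) // 2:].mean()
```

Because the truncation constant of $\mathcal{N}^+$ does not depend on $\theta$, it cancels in the acceptance ratio, so the unnormalized log-prior above suffices; proposals below zero are simply rejected.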