path: root/finale/sections/bayesian.tex
author: jeanpouget-abadie <jean.pougetabadie@gmail.com> 2015-12-11 13:59:56 -0500
committer: jeanpouget-abadie <jean.pougetabadie@gmail.com> 2015-12-11 13:59:56 -0500
commit: ea8ab1d0efa17826ba2d151e09026950fd1cd738 (patch)
tree: 6936d31cc97e87d29add54aabcdc1e9f826f849f /finale/sections/bayesian.tex
parent: ed72283f6d91ca0f86ae96edd4c37fde87d14f47 (diff)
download: cascades-ea8ab1d0efa17826ba2d151e09026950fd1cd738.tar.gz
VI gaussian stuff one formula, going for lunch
Diffstat (limited to 'finale/sections/bayesian.tex')
-rw-r--r--  finale/sections/bayesian.tex  24
1 file changed, 14 insertions, 10 deletions
diff --git a/finale/sections/bayesian.tex b/finale/sections/bayesian.tex
index efc1526..1426c97 100644
--- a/finale/sections/bayesian.tex
+++ b/finale/sections/bayesian.tex
@@ -104,17 +104,21 @@ where $\mathcal{N}^+(\cdot)$ is a gaussian truncated to lie on $\mathbb{R}^+$
since $\Theta$ is a transformed parameter $z \mapsto -\log(1 - z)$. This model
is represented in the graphical model of Figure~\ref{fig:graphical}.
-The product-form of the prior implies that the KL term is entirely decomposable:
+The product-form of the prior implies that the KL term is entirely decomposable.
+Since an easy closed-form formula exists for the KL divergence between two
+gaussians, we approximate the truncated gaussians by their non-truncated
+counterparts.
\begin{equation}
- \text{KL}(q_{\mathbf{\Theta'}}, p_{\mathbf{\Theta}}) = \sum_{ij}
+ \label{eq:kl}
+ \begin{split}
+ \text{KL}(q_{\mathbf{\Theta'}}, p_{\mathbf{\Theta}}) &= \sum_{ij}
KL\left(\mathcal{N}^+(\mu_{ij}, \sigma_{ij}), \mathcal{N}^+(\mu^0_{ij},
- \sigma^0_{ij})\right)
+ \sigma^0_{ij})\right) \\
+ &\approx \sum_{ij} \log \frac{\sigma^0_{ij}}{\sigma_{ij}} +
+ \frac{\sigma^2_{ij} + {(\mu_{ij} - \mu_{ij}^0)}^2}{2{(\sigma^0_{ij})}^2}
+ \end{split}
\end{equation}
-ince an easy closed-form formula exists for the KL divergence between two
-gaussians, we approximate the truncated gaussians by their non-truncated
-counterpart. \begin{equation}
- \label{eq:kl}
- \text{KL}(q_{\mathbf{\Theta'}}, p_{\mathbf{\Theta}}) \approx \sum_{i,j} \log
- \frac{\sigma^0_{ij}}{\sigma_{ij}} +
-\end{equation}
+Reparametrization trick
+Batches
+Algorithm
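
The approximate KL term introduced in this hunk can be checked numerically. Below is a minimal sketch, assuming NumPy; the parameter arrays, prior values, and the function name `gaussian_kl` are hypothetical illustrations, not code from the repository. Note that the exact closed-form Gaussian KL carries an additional constant $-\tfrac{1}{2}$ per term, which the displayed formula drops; since it is constant in the variational parameters, it does not affect the gradients used in optimization.

```python
import numpy as np

def gaussian_kl(mu, sigma, mu0, sigma0):
    """Closed-form KL(N(mu, sigma^2) || N(mu0, sigma0^2)), elementwise.

    Mirrors the approximation in Eq. (kl): the truncated gaussians are
    replaced by their non-truncated counterparts.  The constant -1/2 of
    the exact KL is kept here so the result is a true (non-negative)
    divergence; the paper's formula omits it, which only shifts the
    objective by a constant.
    """
    return (np.log(sigma0 / sigma)
            + (sigma**2 + (mu - mu0)**2) / (2.0 * sigma0**2)
            - 0.5)

# Hypothetical example: sum over all (i, j) entries, as in the
# decomposed KL term of Eq. (kl).
mu = np.array([[0.1, 0.2], [0.3, 0.4]])       # variational means
sigma = np.array([[1.0, 0.5], [0.8, 1.2]])    # variational std-devs
mu0 = np.zeros((2, 2))                        # prior means (assumed)
sigma0 = np.ones((2, 2))                      # prior std-devs (assumed)
total_kl = gaussian_kl(mu, sigma, mu0, sigma0).sum()
```

Because the prior factorizes over edges $(i,j)$, the sum over entries is exactly the decomposition the hunk's first added sentence refers to.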