 problem.tex | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/problem.tex b/problem.tex
index 75529ff..db7108b 100644
--- a/problem.tex
+++ b/problem.tex
@@ -2,7 +2,7 @@
In the context of experimental design, an \emph{experiment} is a random variable $E$ sampled from a distribution $P_\beta$, where $\beta\in \Omega$ is an unknown parameter. An experimenter wishes to learn the parameter $\beta$, and can choose among a set of different possible experiments, all of which have distributions parametrized by the same $\beta$.
The problem of optimal experimental design amounts to determining an experiment that maximizes the information revealed about the parameter $\beta$.
-Though a variety of measures of information exist in the literature (see, \emph{e.g.}, \cite{ginebra}), the so-called \emph{value of information} \cite{lindley} is commonly used in traditional Bayesian experimental design \cite{lindley}. In particular, in the Bayesian setup, it is assumed that $\beta$ is sampled from a known prior distribution. The value of an experiment $E$ is then defined as the expected change in the entropy of $\beta$ (\emph{i.e.}, the mutual information between $E$ and $\beta$), given by
+Though a variety of measures of information exist in the literature (see, \emph{e.g.}, \cite{ginebra,chaloner}), the so-called \emph{value of information} \cite{lindley} is commonly used in traditional Bayesian experimental design. In particular, in the Bayesian setup, it is assumed that $\beta$ is sampled from a known prior distribution. The value of an experiment $E$ is then defined as the expected change in the entropy of $\beta$ (\emph{i.e.}, the mutual information between $E$ and $\beta$), given by
\begin{align}
\mutual(\beta; E) = \entropy(\beta) - \entropy(\beta \mid E).\label{voi}
\end{align}
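
As a quick numerical illustration of \eqref{voi} (an annotation added here, not part of the paper: the discrete toy setup and function names are ours), for a discrete prior $p(\beta)$ and likelihood $p(e \mid \beta)$ the value of information is simply the entropy drop $\entropy(\beta) - \entropy(\beta \mid E)$, which a few lines of Python can evaluate:

import numpy as np

def entropy(p):
    """Shannon entropy (in nats) of a probability vector p."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def value_of_information(prior, likelihood):
    """I(beta; E) = H(beta) - H(beta | E) for discrete beta and E.

    prior:      shape (B,),  p(beta)
    likelihood: shape (B, K), p(e | beta); each row sums to 1
    """
    joint = prior[:, None] * likelihood        # p(beta, e)
    p_e = joint.sum(axis=0)                    # marginal p(e)
    # H(beta | E) = sum_e p(e) * H(beta | E = e)
    cond = sum(p_e[k] * entropy(joint[:, k] / p_e[k])
               for k in range(len(p_e)) if p_e[k] > 0)
    return entropy(prior) - cond

prior = np.array([0.5, 0.5])
print(value_of_information(prior, np.eye(2)))             # ~log 2: E reveals beta exactly
print(value_of_information(prior, np.full((2, 2), 0.5)))  # ~0: E is uninformative

The two printed values bracket \eqref{voi}: a noiseless experiment is worth the full prior entropy, while an experiment independent of $\beta$ is worth nothing.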
@@ -27,7 +27,7 @@ Learning $\beta$ has many interesting applications that make linear regression
In the Bayesian setting,
it is commonly assumed that $\beta$ follows a
multivariate normal distribution of mean zero and covariance matrix $\sigma_1^2
-I_d$. Under this prior and the linear model \eqref{model}, the value of information \eqref{voi} of an experiment $y_S$ is given by \cite{...}
+I_d$. Under this prior and the linear model \eqref{model}, the value of information \eqref{voi} of an experiment $Y_S$ is given by \cite{boyd,chaloner}
\begin{align}\label{vs}
V(S)
& \defeq \mutual(\beta;Y_S) = \frac{1}{2}\log\det\left(I_d
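
The hunk above is cut off mid-equation. Purely as an illustrative sketch (our assumptions, since \eqref{model} is not shown in this excerpt: observations $Y_S = X_S\beta + \varepsilon$ with i.i.d. $N(0,\sigma^2)$ noise, under which the standard completion of \eqref{vs} is $V(S) = \frac{1}{2}\log\det\bigl(I_d + \frac{\sigma_1^2}{\sigma^2} X_S^\top X_S\bigr)$; the design matrix $X_S$ and function name below are ours), the value of an experiment can be computed as:

import numpy as np

def information_value(X_S, sigma1=1.0, sigma=1.0):
    """V(S) in nats for the measurement vectors stacked as rows of X_S.

    Assumes beta ~ N(0, sigma1^2 I_d) and i.i.d. N(0, sigma^2) noise,
    an assumption since the model \eqref{model} is not shown here.
    """
    d = X_S.shape[1]
    M = np.eye(d) + (sigma1**2 / sigma**2) * (X_S.T @ X_S)
    sign, logdet = np.linalg.slogdet(M)  # numerically safer than log(det(M))
    return 0.5 * logdet

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 3))    # 10 candidate measurements in d = 3
print(information_value(X[:4]))     # value of the first four measurements
print(information_value(X))         # a superset is never less informative

Since each added row contributes a positive semidefinite term inside the determinant, $V(S)$ is monotone in $S$, which is what makes subset-selection formulations of experimental design natural here.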