author	Stratis Ioannidis <stratis@stratis-Latitude-E6320.(none)>	2012-10-30 08:26:46 -0700
committer	Stratis Ioannidis <stratis@stratis-Latitude-E6320.(none)>	2012-10-30 08:26:46 -0700
commit	dacff6f8d498ef281066742305db90d1121d7f3b (patch)
tree	bd5b44e093ca9117bb336d2ffed6a4b06faf250f /problem.tex
parent	0d80fbea985c73831e9e20a97e259adf864f41be (diff)
download	recommendation-dacff6f8d498ef281066742305db90d1121d7f3b.tar.gz
Diffstat (limited to 'problem.tex')
-rw-r--r--	problem.tex	4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/problem.tex b/problem.tex
index 3faef3b..c3fd38f 100644
--- a/problem.tex
+++ b/problem.tex
@@ -2,7 +2,7 @@
In the context of experimental design, an \emph{experiment} is a random variable $E$ sampled from a distribution $P_\beta$, where $\beta\in \Omega$ is an unknown parameter. An experimenter wishes to learn parameter $\beta$, and can choose among a set of different possible experiments, all of which have distributions parametrized by the same $\beta$.
The problem of optimal experimental design amounts to determining an experiment that maximizes the information revealed about parameter $\beta$.
-Though a variety of measures of information exist in the literature (see, \emph{e.g.}, \cite{ginebra}), the so-called \emph{value of information} \cite{lindley} is commonly used in traditional Bayesian experimental design \cite{lindley}. In particular, in the Bayesian setup, it is assumed that $\beta$ is sampled from a known prior distribution. The value of an experiment $E$ is then defined as the expected change in the entropy of $\beta$ (\emph{i.e.}, the mutual information between $E$ and $\beta$), given by
+Though a variety of measures of information exist in the literature (see, \emph{e.g.}, \cite{ginebra,chaloner}), the so-called \emph{value of information} \cite{lindley} is commonly used in traditional Bayesian experimental design. In particular, in the Bayesian setup, it is assumed that $\beta$ is sampled from a known prior distribution. The value of an experiment $E$ is then defined as the expected change in the entropy of $\beta$ (\emph{i.e.}, the mutual information between $E$ and $\beta$), given by
\begin{align}
\mutual(\beta; E) = \entropy(\beta) - \entropy(\beta \mid E).\label{voi}
\end{align}
@@ -27,7 +27,7 @@ Learning $\beta$ has many interesting applications that make linear regression
In the Bayesian setting,
it is commonly assumed that $\beta$ follows a
multivariate normal distribution of mean zero and covariance matrix $\sigma_1^2
-I_d$. Under this prior and the linear model \eqref{model}, the value of information \eqref{voi} of an experiment $Y_S$ is given by \cite{...}
+I_d$. Under this prior and the linear model \eqref{model}, the value of information \eqref{voi} of an experiment $Y_S$ is given by \cite{boyd,chaloner}
\begin{align}\label{vs}
V(S)
& \defeq I(\beta;y_S) = \frac{1}{2}\log\det\left(I_d
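As a sanity check on the value of information \eqref{vs}, the short Python sketch below compares the log-determinant expression with the entropy reduction $\entropy(\beta) - \entropy(\beta \mid y_S)$ from \eqref{voi}. Since the linear model \eqref{model} is not shown in this hunk, the sketch assumes the usual setup: observations $y_i = x_i^\top \beta + \varepsilon_i$ with independent noise of variance $\sigma^2$, prior $\beta \sim N(0, \sigma_1^2 I_d)$, and a design matrix whose rows are the feature vectors of the experiments in $S$; the variable names sigma2, sigma1sq, and X_S (and the numerical values) are illustrative assumptions, not part of the paper.

import numpy as np

# Assumed setup (not shown in this hunk): y_i = x_i^T beta + eps_i with
# eps_i ~ N(0, sigma^2) and prior beta ~ N(0, sigma1^2 I_d). Under these
# assumptions the value of information of the experiments in S is
#   V(S) = I(beta; y_S) = 1/2 * log det(I_d + (sigma1^2 / sigma^2) * X_S^T X_S),
# which coincides with the entropy reduction H(beta) - H(beta | y_S).
rng = np.random.default_rng(0)
d, n = 3, 5                      # parameter dimension, number of experiments in S
sigma2, sigma1sq = 0.5, 2.0      # noise variance, prior variance (illustrative values)
X_S = rng.normal(size=(n, d))    # rows = feature vectors of the selected experiments

# Log-determinant form of V(S).
V_S = 0.5 * np.linalg.slogdet(np.eye(d) + (sigma1sq / sigma2) * X_S.T @ X_S)[1]

def gauss_entropy(C):
    # Differential entropy of a Gaussian with covariance C: 1/2 * log det(2*pi*e*C).
    return 0.5 * np.linalg.slogdet(2 * np.pi * np.e * C)[1]

# Entropy-difference form: the posterior of beta given y_S is Gaussian with
# covariance (I_d / sigma1^2 + X_S^T X_S / sigma^2)^{-1}.
prior_cov = sigma1sq * np.eye(d)
post_cov = np.linalg.inv(np.eye(d) / sigma1sq + X_S.T @ X_S / sigma2)
info_gain = gauss_entropy(prior_cov) - gauss_entropy(post_cov)

print(V_S, info_gain)            # the two quantities agree up to floating-point error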