Diffstat (limited to 'general.tex')
 general.tex | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/general.tex b/general.tex
index 12e01a5..589b176 100644
--- a/general.tex
+++ b/general.tex
@@ -10,7 +10,7 @@ The experimenter estimates $\beta$ through \emph{maximum a posteriori estimation
 This optimization, commonly known as \emph{ridge regression}, includes an additional penalty term compared to the least squares estimation \eqref{leastsquares}.
-Let $\entropy(\beta)$ be the entropy of $\beta$ under this distribution, and $\entropy(\beta\mid y_S)$ the entropy of $\beta$ conditioned on the experiment outcomes $Y_S$, for some $S\subseteq \mathcal{N}$. In this setting, a natural objective to select a set of experiments $S$ that maximizes her \emph{information gain}:
+Let $\entropy(\beta)$ be the entropy of $\beta$ under this distribution, and $\entropy(\beta\mid y_S)$ the entropy of $\beta$ conditioned on the experiment outcomes $Y_S$, for some $S\subseteq \mathcal{N}$. In this setting, a natural objective, originally proposed by Lindley \cite{lindley1956measure}, is to select a set of experiments $S$ that maximizes her \emph{information gain}:
 $$
 I(\beta;y_S) = \entropy(\beta)-\entropy(\beta\mid y_S).
 $$
 Assuming normal noise variables, the information gain is equal (up to a constant) to the following value function \cite{chaloner1995bayesian}:
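The information gain $I(\beta;y_S) = \entropy(\beta)-\entropy(\beta\mid y_S)$ discussed in this diff can be computed in closed form for the Gaussian setting the text describes. The sketch below is illustrative only: it assumes a prior $\beta \sim N(0, \tau^2 I)$ and noise variance $\sigma^2$ (the specific values, the design matrix `X_S`, and all variable names are made up for the example, not taken from the paper), and uses the fact that for Gaussians the $(2\pi e)^d$ entropy terms cancel, leaving half the log-ratio of prior to posterior determinants.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 3                                  # dimension of beta (illustrative)
sigma2, tau2 = 1.0, 2.0                # noise and prior variances (illustrative)
X_S = rng.standard_normal((5, d))      # design matrix of the chosen experiments S

# Prior: beta ~ N(0, tau2 * I).  Posterior covariance after observing y_S
# (the MAP / ridge-regression posterior): (X^T X / sigma2 + I / tau2)^{-1}.
prior_cov = tau2 * np.eye(d)
post_cov = np.linalg.inv(X_S.T @ X_S / sigma2 + np.eye(d) / tau2)

# I(beta; y_S) = H(beta) - H(beta | y_S)
#             = 0.5 * (log det(prior_cov) - log det(post_cov)),
# since the (2*pi*e)^d factors in the Gaussian entropies cancel.
info_gain = 0.5 * (np.linalg.slogdet(prior_cov)[1]
                   - np.linalg.slogdet(post_cov)[1])
print(info_gain)
```

Because observing outcomes can only shrink the posterior covariance, this quantity is nonnegative, which is one reason it serves as a natural objective for selecting the experiment set $S$.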
