From 55809604c10dcd4ace71079ed8d72a1ca1a5a9eb Mon Sep 17 00:00:00 2001
From: Stratis Ioannidis
Date: Sat, 3 Nov 2012 23:30:03 -0700
Subject: intro

---
 general.tex | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

(limited to 'general.tex')

diff --git a/general.tex b/general.tex
index 12e01a5..589b176 100644
--- a/general.tex
+++ b/general.tex
@@ -10,7 +10,7 @@ The experimenter estimates $\beta$ through \emph{maximum a posteriori estimation
 This optimization, commonly known as \emph{ridge regression}, includes an additional penalty term compared to the least squares estimation \eqref{leastsquares}.
 
-Let $\entropy(\beta)$ be the entropy of $\beta$ under this distribution, and $\entropy(\beta\mid y_S)$ the entropy of $\beta$ conditioned on the experiment outcomes $Y_S$, for some $S\subseteq \mathcal{N}$. In this setting, a natural objective to select a set of experiments $S$ that maximizes her \emph{information gain}:
+Let $\entropy(\beta)$ be the entropy of $\beta$ under this distribution, and $\entropy(\beta\mid y_S)$ the entropy of $\beta$ conditioned on the experiment outcomes $Y_S$, for some $S\subseteq \mathcal{N}$. In this setting, a natural objective, originally proposed by Lindley \cite{lindley1956measure}, is to select a set of experiments $S$ that maximizes her \emph{information gain}:
 $$ I(\beta;y_S) = \entropy(\beta)-\entropy(\beta\mid y_S). $$
 Assuming normal noise variables, the information gain is equal (up to a constant) to the following value function \cite{chaloner1995bayesian}:
--
cgit v1.2.3-70-g09d2
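
The hunk ends just before the value function it introduces, so the formula itself does not appear in this patch. For reference, a minimal sketch of how it typically works out, assuming the standard Bayesian linear model for this setting: responses $y_i = x_i^\top\beta + \varepsilon_i$ with noise $\varepsilon_i \sim N(0,\sigma^2)$ and prior $\beta \sim N(0,\sigma_\beta^2 I_d)$ (the prior and noise parameters here are assumptions of the sketch, not read from general.tex). The Gaussian differential entropies then have closed forms:

% Sketch only: Gaussian entropies under the assumed prior/noise model;
% the exact statement in general.tex may differ.
\begin{align*}
  \entropy(\beta) &= \tfrac{d}{2}\log(2\pi e) + \tfrac{d}{2}\log\sigma_\beta^2,\\
  \entropy(\beta\mid y_S) &= \tfrac{d}{2}\log(2\pi e)
     - \tfrac{1}{2}\log\det\bigl(\sigma_\beta^{-2} I_d + \sigma^{-2} X_S^\top X_S\bigr),\\
  I(\beta;y_S) &= \entropy(\beta) - \entropy(\beta\mid y_S)
     = \tfrac{1}{2}\log\det\Bigl(I_d + \tfrac{\sigma_\beta^2}{\sigma^2}\, X_S^\top X_S\Bigr),
\end{align*}

where $X_S$ is the matrix whose rows are the feature vectors $x_i^\top$ of the experiments $i \in S$. Because the posterior covariance of a Gaussian does not depend on the realized outcomes, the conditional entropy, and hence $I(\beta;y_S)$, is a deterministic function of the set $S$, which is what makes it usable as a set-valued value function. Up to constants, this log-determinant objective is the Bayesian D-optimality criterion reviewed in \cite{chaloner1995bayesian}.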