path: root/general.tex
author    Stratis Ioannidis <stratis@stratis-Latitude-E6320.(none)>  2012-11-03 23:30:03 -0700
committer Stratis Ioannidis <stratis@stratis-Latitude-E6320.(none)>  2012-11-03 23:30:03 -0700
commit    55809604c10dcd4ace71079ed8d72a1ca1a5a9eb (patch)
tree      50eedab5ceefe4e536ae592eda26d6e6f132399d /general.tex
parent    981f1d7c5a9f46274ab0d651a28334d39044c209 (diff)
download  recommendation-55809604c10dcd4ace71079ed8d72a1ca1a5a9eb.tar.gz
intro
Diffstat (limited to 'general.tex')
-rw-r--r--  general.tex  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/general.tex b/general.tex
index 12e01a5..589b176 100644
--- a/general.tex
+++ b/general.tex
@@ -10,7 +10,7 @@ The experimenter estimates $\beta$ through \emph{maximum a posteriori estimation
This optimization, commonly known as \emph{ridge regression}, includes an additional penalty term compared to the least squares estimation \eqref{leastsquares}.
-Let $\entropy(\beta)$ be the entropy of $\beta$ under this distribution, and $\entropy(\beta\mid y_S)$ the entropy of $\beta$ conditioned on the experiment outcomes $Y_S$, for some $S\subseteq \mathcal{N}$. In this setting, a natural objective to select a set of experiments $S$ that maximizes her \emph{information gain}:
+Let $\entropy(\beta)$ be the entropy of $\beta$ under this distribution, and $\entropy(\beta\mid y_S)$ the entropy of $\beta$ conditioned on the experiment outcomes $Y_S$, for some $S\subseteq \mathcal{N}$. In this setting, a natural objective, originally proposed by Lindley \cite{lindley1956measure}, is to select a set of experiments $S$ that maximizes her \emph{information gain}:
$$ I(\beta;y_S) = \entropy(\beta)-\entropy(\beta\mid y_S). $$
Assuming normal noise variables, the information gain is equal (up to a constant) to the following value function \cite{chaloner1995bayesian}:
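
The information gain in the patched passage, $I(\beta;y_S) = \entropy(\beta)-\entropy(\beta\mid y_S)$, can be checked numerically for the Gaussian setting the diff context describes. The sketch below assumes a prior $\beta \sim N(0, \tau^2 I)$ and i.i.d. normal noise with variance $\sigma^2$ (the variable names `tau2`, `sigma2`, and the subset `S` are illustrative choices, not from the paper), and verifies that the entropy difference matches the closed-form $\tfrac{1}{2}\log\det(I + \tfrac{\tau^2}{\sigma^2} X_S^\top X_S)$ familiar from Bayesian D-optimal design:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 3, 5
X = rng.normal(size=(n, d))   # design matrix: one experiment per row
tau2, sigma2 = 2.0, 0.5       # prior variance of beta, noise variance (illustrative)

def gauss_entropy(cov):
    """Differential entropy of N(mu, cov): 0.5 * log((2*pi*e)^d * det(cov))."""
    k = cov.shape[0]
    return 0.5 * (k * np.log(2 * np.pi * np.e) + np.linalg.slogdet(cov)[1])

S = [0, 2, 4]                 # a chosen subset of experiments
XS = X[S]

prior_cov = tau2 * np.eye(d)
# Posterior covariance of beta given y_S under the Gaussian model
post_cov = np.linalg.inv(np.eye(d) / tau2 + XS.T @ XS / sigma2)

# Information gain: H(beta) - H(beta | y_S)
gain = gauss_entropy(prior_cov) - gauss_entropy(post_cov)

# Closed form: 0.5 * log det(I + (tau2/sigma2) * XS^T XS)
closed = 0.5 * np.linalg.slogdet(np.eye(d) + (tau2 / sigma2) * (XS.T @ XS))[1]

assert np.isclose(gain, closed)
```

Because both entropies share the $\tfrac{d}{2}\log(2\pi e)$ constant, only the log-determinant terms survive in the difference, which is why the gain reduces to a single determinant and is nonnegative for any $S$.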