Diffstat (limited to 'problem.tex')
-rw-r--r--  problem.tex  11
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/problem.tex b/problem.tex
index 3ad3270..9d3fb9f 100644
--- a/problem.tex
+++ b/problem.tex
@@ -55,6 +55,9 @@ Under the linear model \eqref{model}, and the Gaussian prior, the information ga
\begin{align}
V(S) &= \frac{1}{2}\log\det(R+ \T{X_S}X_S) \label{dcrit} %\\
\end{align}
+This value function is known in the experimental design literature as the
+$D$-optimality criterion
+\cite{pukelsheim2006optimal,atkinson2007optimum,chaloner1995bayesian}.
%which is indeed a function of the covariance matrix $(R+\T{X_S}X_S)^{-1}$.
%defined as $-\infty$ when $\mathrm{rank}(\T{X_S}X_S)<d$.
%As $\hat{\beta}$ is a multidimensional normal random variable, the
@@ -69,7 +72,13 @@ Under the linear model \eqref{model}, and the Gaussian prior, the information ga
%\end{align}
%There are several reasons
%In addition, the maximization of convex relaxations of this function is a well-studied problem \cite{boyd}.
-Our analysis will focus on the case of a \emph{homotropic} prior, in which the prior covariance is the identity matrix, \emph{i.e.}, $R=I_d\in \reals^{d\times d}.$ Intuitively, this corresponds to the simplest prior, in which no direction of $\reals^d$ is a priori favored; equivalently, it also corresponds to the case where ridge regression estimation \eqref{ridge} performed by $\E$ has a penalty term $\norm{\beta}_2^2$. In Section 5, we will address other $R$'s.
+Our analysis will focus on the case of an \emph{isotropic} prior, in which the
+prior covariance is the identity matrix, \emph{i.e.}, $R=I_d\in \reals^{d\times
+d}$. Intuitively, this corresponds to the simplest prior, in which no direction
+of $\reals^d$ is a priori favored; equivalently, it also corresponds to the
+case where the ridge regression estimation \eqref{ridge} performed by $\E$ uses
+the penalty term $\norm{\beta}_2^2$. A generalization of our results to general
+matrices $R$ can be found in Section~\ref{sec:ext}.
%Note that \eqref{dcrit} is a submodular set function, \emph{i.e.},
%$V(S)+V(T)\geq V(S\cup T)+V(S\cap T)$ for all $S,T\subseteq \mathcal{N}$; it is also monotone, \emph{i.e.}, $V(S)\leq V(T)$ for all $S\subset T$.
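The value function added in the patch above, $V(S)=\frac{1}{2}\log\det(R+\T{X_S}X_S)$, is straightforward to compute numerically. The sketch below (names `d_opt_value`, `X`, `R` are illustrative, not from the paper) evaluates it with a numerically stable log-determinant, using the identity prior $R=I_d$ discussed in the patch:

```python
import numpy as np

def d_opt_value(X_S: np.ndarray, R: np.ndarray) -> float:
    """D-optimality value function V(S) = 0.5 * log det(R + X_S^T X_S).

    X_S : (|S|, d) matrix whose rows are the selected experiments.
    R   : (d, d) prior precision matrix (identity for the isotropic case).
    """
    # slogdet avoids overflow/underflow compared to log(det(...)).
    sign, logdet = np.linalg.slogdet(R + X_S.T @ X_S)
    assert sign > 0, "R + X_S^T X_S must be positive definite"
    return 0.5 * logdet

d = 3
rng = np.random.default_rng(0)
X = rng.standard_normal((10, d))  # hypothetical pool of candidate rows
R = np.eye(d)                     # identity prior, as in the patch

v_empty = d_opt_value(X[:0], R)   # V(emptyset) = 0.5 * log det(I_d) = 0
v_half = d_opt_value(X[:5], R)
v_all = d_opt_value(X, R)
print(v_empty, v_half, v_all)
```

Since adding a row contributes a positive-semidefinite rank-one term to $\T{X_S}X_S$, the computed values are monotone in the selected set ($V(\emptyset)\leq V(S)\leq V(T)$ for $S\subseteq T$), consistent with the monotonicity noted in the commented-out remark.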