authorThibaut Horel <thibaut.horel@gmail.com>2013-02-11 10:39:39 -0800
committerThibaut Horel <thibaut.horel@gmail.com>2013-02-11 10:39:39 -0800
commite73a4651379d2c34855f1bc6fe5c0abef039b1d5 (patch)
treeacbb0e1005837772c274bd8c1efe698781337f63
parentf8d7d5fd0fbe81a26ddf727e547a9cea2f7216e6 (diff)
downloadrecommendation-e73a4651379d2c34855f1bc6fe5c0abef039b1d5.tar.gz
Mending section 3 and 5
-rw-r--r--general.tex33
-rw-r--r--problem.tex11
2 files changed, 32 insertions, 12 deletions
diff --git a/general.tex b/general.tex
index 3c7eddf..429299a 100644
--- a/general.tex
+++ b/general.tex
@@ -1,16 +1,16 @@
-\subsection{Strategic Experimental Design with non-homotropic prior}\label{sec:bed}
+\subsection{Strategic Experimental Design with Non-Homotropic Prior}\label{sec:bed}
%In this section, we extend our results to Bayesian experimental design
%\cite{chaloner1995bayesian}. We show that objective function \eqref{modified}
%has a natural interpretation in this context, further motivating its selection
%as our objective. Moreover,
-If the general case where the prior distribution of the experimenter on the
+In the general case where the prior distribution of the experimenter on the
model $\beta$ in \eqref{model} is not homotropic and has a generic covariance
matrix $R$, the value function takes the general form given by
\eqref{dcrit}.
-Applying the mechanism described in algorithm~\ref{mechanism} and adapting the
+Applying the mechanism described in Algorithm~\ref{mechanism} and adapting the
analysis of the approximation ratio, we get the following result which extends
Theorem~\ref{thm:main}:
@@ -26,20 +26,31 @@ Theorem~\ref{thm:main}:
where $\mu$ is the smallest eigenvalue of $R$.
\end{theorem}
-\subsection{Other Experimental Design Criteria}
+\subsection{Non-Bayesian Setting}
-A value function which is frequently used in experimental design is the
-$D$-optimality criterion obtained by replacing $R$ by the zero matrix in
-\eqref{dcrit}:
+In the non-Bayesian setting, \emph{i.e.}, when the experimenter has no prior
+distribution on the model, the covariance matrix $R$ is the zero matrix and
+ridge regression \eqref{ridge} reduces to ordinary least squares. In this case,
+the $D$-optimality criterion takes the following form:
\begin{equation}\label{eq:d-optimal}
V(S) = \log\det(X_S^TX_S)
\end{equation}
-Since \eqref{eq:d-optimal} may take arbitrarily small negative values, to define a meaningful approximation one would consider the (equivalent) maximization of $V(S) = \det\T{X_S}X_S$. %, for some strictly increasing, on-to function $f:\reals_+\to\reals_+$.
-However, the following lower bound implies that such an optimization goal cannot be attained under the constraints of truthfulness, budget feasibility, and individual rationality.
+A natural question is whether it is possible to design
+a deterministic mechanism in this setting. Since \eqref{eq:d-optimal} may take
+arbitrarily small negative values, to define a meaningful approximation one
+would consider the (equivalent) maximization of $V(S) = \det\T{X_S}X_S$.
+However, the following lower bound implies that such an optimization goal
+cannot be attained under the constraints of truthfulness, budget feasibility,
+and individual rationality.
+
\begin{lemma}
-For any $M>1$, there is no $M$-approximate, truthful, budget feasible, individually rational mechanism for a budget feasible reverse auction with value function $V(S) = \det{\T{X_S}X_S}$.
-For any $M>1$, there is no $M$-approximate, truthful, budget feasible, individually rational mechanism for a budget feasible reverse auction with $V(S) = \det{\T{X_S}X_S}$.
+For any $M>1$, there is no $M$-approximate, truthful, budget feasible,
+individually rational mechanism for a budget feasible reverse auction with
+value function $V(S) = \det{\T{X_S}X_S}$.
\end{lemma}
+
\begin{proof}
\input{proof_of_lower_bound1}
\end{proof}
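As a quick numerical sanity check of why \eqref{eq:d-optimal} needs rescaling (the matrices below are illustrative only, not taken from the paper): shrinking the design rows drives $\log\det(\T{X_S}X_S)$ to arbitrarily negative values, while the equivalent objective $\det(\T{X_S}X_S)$ stays positive.

```python
import numpy as np

# Hypothetical 2x2 design: two small data points in R^2 (illustrative only).
X_S = np.array([[0.1, 0.0],
                [0.0, 0.1]])

# Non-Bayesian D-optimality criterion: V(S) = log det(X_S^T X_S).
# Here det(X_S^T X_S) = 0.01 * 0.01 = 1e-4, so the log is negative;
# scaling X_S down further makes it arbitrarily negative.
V_log = np.log(np.linalg.det(X_S.T @ X_S))

# Equivalent maximization objective used in the lemma: det(X_S^T X_S) > 0.
V_det = np.linalg.det(X_S.T @ X_S)
```

This is why the lemma is stated for $V(S) = \det{\T{X_S}X_S}$ rather than for the logarithmic form, which has no meaningful multiplicative approximation.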
diff --git a/problem.tex b/problem.tex
index 3ad3270..9d3fb9f 100644
--- a/problem.tex
+++ b/problem.tex
@@ -55,6 +55,9 @@ Under the linear model \eqref{model}, and the Gaussian prior, the information ga
\begin{align}
V(S) &= \frac{1}{2}\log\det(R+ \T{X_S}X_S) \label{dcrit} %\\
\end{align}
+This value function is known in the experimental design literature as the
+$D$-optimality criterion
+\cite{pukelsheim2006optimal,atkinson2007optimum,chaloner1995bayesian}.
%which is indeed a function of the covariance matrix $(R+\T{X_S}X_S)^{-1}$.
%defined as $-\infty$ when $\mathrm{rank}(\T{X_S}X_S)<d$.
%As $\hat{\beta}$ is a multidimensional normal random variable, the
@@ -69,7 +72,13 @@ Under the linear model \eqref{model}, and the Gaussian prior, the information ga
%\end{align}
%There are several reasons
%In addition, the maximization of convex relaxations of this function is a well-studied problem \cite{boyd}.
-Our analysis will focus on the case of a \emph{homotropic} prior, in which the prior covariance is the identity matrix, \emph{i.e.}, $R=I_d\in \reals^{d\times d}.$ Intuitively, this corresponds to the simplest prior, in which no direction of $\reals^d$ is a priori favored; equivalently, it also corresponds to the case where ridge regression estimation \eqref{ridge} performed by $\E$ has a penalty term $\norm{\beta}_2^2$. In Section 5, we will address other $R$'s.
+Our analysis will focus on the case of a \emph{homotropic} prior, in which the
+prior covariance is the identity matrix, \emph{i.e.}, $R=I_d\in \reals^{d\times
+d}.$ Intuitively, this corresponds to the simplest prior, in which no direction
+of $\reals^d$ is a priori favored; equivalently, it also corresponds to the
+case where ridge regression estimation \eqref{ridge} performed by $\E$ has
+a penalty term $\norm{\beta}_2^2$. A generalization of our results to general
+matrices $R$ can be found in Section~\ref{sec:ext}.
%Note that \eqref{dcrit} is a submodular set function, \emph{i.e.},
%$V(S)+V(T)\geq V(S\cup T)+V(S\cap T)$ for all $S,T\subseteq \mathcal{N}$; it is also monotone, \emph{i.e.}, $V(S)\leq V(T)$ for all $S\subset T$.
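A minimal numerical sketch of the Bayesian criterion \eqref{dcrit} with the homotropic prior $R=I_d$ (the matrix below is a made-up example, not from the paper): unlike the non-Bayesian case, the prior keeps $R+\T{X_S}X_S$ positive definite, so the value function is finite even for the empty set.

```python
import numpy as np

d = 2
R = np.eye(d)                    # homotropic prior covariance, R = I_d

# Hypothetical selected rows of the design matrix (illustrative only).
X_S = np.array([[1.0, 0.0],
                [1.0, 1.0],
                [0.0, 1.0]])

# Bayesian D-optimality criterion: V(S) = (1/2) log det(R + X_S^T X_S).
# X_S^T X_S = [[2, 1], [1, 2]], so R + X_S^T X_S = [[3, 1], [1, 3]], det = 8.
V = 0.5 * np.log(np.linalg.det(R + X_S.T @ X_S))

# Empty set: V(emptyset) = (1/2) log det(I_d) = 0, finite thanks to the prior.
V_empty = 0.5 * np.log(np.linalg.det(R))
```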