author	Stratis Ioannidis <stratis@stratis-Latitude-E6320.(none)>	2012-11-05 11:55:19 -0800
committer	Stratis Ioannidis <stratis@stratis-Latitude-E6320.(none)>	2012-11-05 11:55:19 -0800
commit	b3d93b68f5d032a4b438400edd26c3282c2ae5d1 (patch)
tree	fd99ed5b6a369ab8ec702565b112b2ea8be83cba /general.tex
parent	07e4a9dc1a5ff20996dc67adf01fa35cc233cfbe (diff)
download	recommendation-b3d93b68f5d032a4b438400edd26c3282c2ae5d1.tar.gz
muthu section 5
Diffstat (limited to 'general.tex')
-rw-r--r--	general.tex	36
1 file changed, 36 insertions(+), 0 deletions(-)
diff --git a/general.tex b/general.tex
index 3815c83..c11cb99 100644
--- a/general.tex
+++ b/general.tex
@@ -56,6 +56,42 @@ Theorem~\ref{thm:main}:
where $\mu$ is the smallest eigenvalue of $R$.
\end{theorem}
+\subsection{$D$-Optimality and Beyond}
+
+Ideally, motivated by the $D$-optimality criterion, we would like to design a mechanism that maximizes \eqref{dcrit} to within a good approximation ratio. In what follows, we consider the following slightly more general objective:
+\begin{center}
+\textsc{ExperimentalDesignProblem} (EDP)
+\begin{subequations}
+\begin{align}
+\text{Maximize}\quad V(S) &= \log\det(I_d+\T{X_S}X_S) \label{modified} \\
+\text{subject to}\quad \sum_{i\in S} c_i&\leq B
+\end{align}\label{edp}
+\end{subequations}
+\end{center} where $I_d\in \reals^{d\times d}$ is the identity matrix.
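As a concrete illustration of the objective \eqref{modified}, the following sketch (hypothetical names, `numpy` assumed; not part of the paper) evaluates $V(S)=\log\det(I_d+\T{X_S}X_S)$ for a subset $S$ of the rows of a feature matrix:

```python
import numpy as np

def edp_value(X, S, d):
    """EDP objective V(S) = log det(I_d + X_S^T X_S).

    Illustrative only: X is a hypothetical n-by-d matrix whose rows are
    the experiments' feature vectors x_i, and S indexes the chosen set.
    """
    XS = X[sorted(S)]                 # rows of X indexed by S (empty S gives a 0-by-d matrix)
    M = np.eye(d) + XS.T @ XS         # I_d + X_S^T X_S, always positive definite
    _, logdet = np.linalg.slogdet(M)  # numerically stable log-determinant
    return logdet

# Toy instance with d = 2 and three experiments
X = np.array([[1.0, 0.0],
              [0.5, 0.5],
              [0.0, 2.0]])
print(edp_value(X, set(), 2))   # 0.0, since V(emptyset) = log det(I_d) = 0
```

Adding any experiment appends a positive semidefinite rank-one term to $M$, so $V$ can only increase, consistent with the monotonicity noted below.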
+\junk{One possible interpretation of \eqref{modified} is that, prior to the auction, the experimenter has free access to $d$ experiments whose features form an orthonormal basis in $\reals^d$. However, \eqref{modified} can also be motivated in the context of \emph{Bayesian experimental design} \cite{chaloner1995bayesian}. In short, the objective \eqref{modified} arises naturally when the experimenter retrieves the model $\beta$ through \emph{ridge regression}, rather than the linear regression \eqref{leastsquares} over the observed data; we explore this connection in Section~\ref{sec:bed}.
+}
+
+We present our results for this version of the objective because it is simple and captures the variants
+we will need later (allowing an arbitrary matrix in place of $I_d$, or the zero matrix, which recovers \eqref{dcrit}).
+
+%Note that maximizing \eqref{modified} is equivalent to maximizing \eqref{dcrit} in the full-information case. In particular, $\det(\T{X_S}X_S)> \det(\T{X_{S'}}X_{S'})$ iff $\det(I_d+\T{X_S}X_S)>\det(I_d+\T{X_{S'}}X_{S'})$. In addition,
+\EDP{} \emph{and} the corresponding problem with objective \eqref{dcrit} are NP-hard: \textsc{Knapsack} reduces to \EDP{} with dimension $d=1$ by mapping each item of weight $w_i$ to an experiment with $x_i=\sqrt{w_i}$, so that $V(S)=\log(1+\sum_{i\in S}w_i)$ is a monotone transform of the total weight. Moreover, \eqref{modified} is monotone and submodular, and satisfies $V(\emptyset) = 0$.
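+The monotonicity and submodularity claims follow from a standard argument, sketched here (writing $x_i\in\reals^d$ for the feature vector of experiment $i$, so that the rows of $X_S$ are the $\T{x_i}$): by the matrix determinant lemma,
+\[
+V(S\cup\{i\}) - V(S) \;=\; \log\left(1 + \T{x_i}\left(I_d+\T{X_S}X_S\right)^{-1}x_i\right) \;\geq\; 0,
+\]
+which gives monotonicity; moreover, $S\subseteq T$ implies $(I_d+\T{X_T}X_T)^{-1}\preceq (I_d+\T{X_S}X_S)^{-1}$ in the positive semidefinite order, so the marginal gain of any experiment $i$ can only shrink as the set grows, which is submodularity.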
+%, allowing us to use the extensive machinery for the optimization of submodular functions, as well as recent results in the
+% context of budget feasible auctions \cite{chen,singer-mechanisms}.
+
+%\stratis{A potential problem is that all of the above properties hold also for, \emph{e.g.}, $V(S)=\log(1+\det(\T{X_S}X_S))$\ldots}
+
+As \eqref{dcrit} can be arbitrarily negative, to define a meaningful approximation ratio one would consider the (equivalent) maximization of $V(S) = \det(\T{X_S}X_S)$. %, for some strictly increasing, onto function $f:\reals_+\to\reals_+$.
+However, the following lower bound implies that such an optimization goal cannot be attained under the constraints of truthfulness, budget feasibility, and individual rationality.
+\begin{lemma}
+For any $M>1$, there is no $M$-approximate, truthful, budget feasible, individually rational mechanism for a budget feasible reverse auction with value function $V(S) = \det(\T{X_S}X_S)$.
+\end{lemma}
+\begin{proof}
+\input{proof_of_lower_bound1}
+\end{proof}
+
+
\subsection{Beyond Linear Models}
Selecting experiments that maximize the information gain in the Bayesian setup leads to a natural generalization to learning settings beyond linear regression. In particular, in the more general PAC learning setup \cite{...}, the features $x_i$, $i\in \mathcal{N}$, take values in some general set $\Omega$, called the \emph{query space}, and measurements $y_i\in\reals$ are given by
\begin{equation}\label{eq:hypothesis-model}