Diffstat (limited to 'general.tex')
| -rw-r--r-- | general.tex | 36 |
1 files changed, 36 insertions, 0 deletions
diff --git a/general.tex b/general.tex
index 49adab2..957521a 100644
--- a/general.tex
+++ b/general.tex
@@ -58,6 +58,42 @@ Theorem~\ref{thm:main}:
 where $\mu$ is the smallest eigenvalue of $R$.
 \end{theorem}
+\subsection{$D$-Optimality and Beyond}
+
+Ideally, motivated by the $D$-optimality criterion, we would like to design a mechanism that maximizes \eqref{dcrit} within a good approximation ratio. In what follows, we consider the slightly more general objective:
+\begin{center}
+\textsc{ExperimentalDesignProblem} (EDP)
+\begin{subequations}
+\begin{align}
+\text{Maximize}\quad V(S) &= \log\det(I_d+\T{X_S}X_S) \label{modified} \\
+\text{subject to}\quad \sum_{i\in S} c_i &\leq B
+\end{align}\label{edp}
+\end{subequations}
+\end{center} where $I_d\in \reals^{d\times d}$ is the identity matrix.
+\junk{One possible interpretation of \eqref{modified} is that, prior to the auction, the experimenter has free access to $d$ experiments whose features form an orthonormal basis in $\reals^d$. However, \eqref{modified} can also be motivated in the context of \emph{Bayesian experimental design} \cite{chaloner1995bayesian}. In short, the objective \eqref{modified} arises naturally when the experimenter retrieves the model $\beta$ through \emph{ridge regression}, rather than the linear regression \eqref{leastsquares} over the observed data; we explore this connection in Section~\ref{sec:bed}.
+}
+
+We present our results with this version of the objective function because it is simple and captures the versions
+we will need later (including an arbitrary matrix in place of $I_d$, or the zero matrix, which corresponds to \eqref{dcrit}).
+
+%Note that maximizing \eqref{modified} is equivalent to maximizing \eqref{dcrit} in the full-information case. In particular, $\det(\T{X_S}X_S)> \det(\T{X_{S'}}X_{S'})$ iff $\det(I_d+\T{X_S}X_S)>\det(I_d+\T{X_{S'}}X_{S'})$. In addition,
+\EDP{} \emph{and} the corresponding problem with objective \eqref{dcrit} are NP-hard; to see this, note that \textsc{Knapsack} reduces to EDP with dimension $d=1$ by mapping the weight $w_i$ of each item to an experiment with $x_i=w_i$. Moreover, \eqref{modified} is submodular, monotone, and satisfies $V(\emptyset) = 0$.
+%, allowing us to use the extensive machinery for the optimization of submodular functions, as well as recent results in the
+% context of budget feasible auctions \cite{chen,singer-mechanisms}.
+
+%\stratis{A potential problem is that all of the above properties hold also for, \emph{e.g.}, $V(S)=\log(1+\det(\T{X_S}X_S))$\ldots}
+
+As \eqref{dcrit} may take arbitrarily small negative values, to define a meaningful approximation one would consider the (equivalent) maximization of $V(S) = \det(\T{X_S}X_S)$. %, for some strictly increasing, onto function $f:\reals_+\to\reals_+$.
+However, the following lower bound implies that such an optimization goal cannot be attained under the constraints of truthfulness, budget feasibility, and individual rationality.
+\begin{lemma}
+For any $M>1$, there is no $M$-approximate, truthful, budget feasible, individually rational mechanism for a budget feasible reverse auction with value function $V(S) = \det(\T{X_S}X_S)$.
+\end{lemma}
+\begin{proof}
+\input{proof_of_lower_bound1}
+\end{proof}
+
+
 \subsection{Beyond Linear Models}
 Selecting experiments that maximize the information gain in the Bayesian setup leads to a natural generalization to other learning examples beyond linear
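The added subsection asserts that the EDP objective $V(S)=\log\det(I_d+\T{X_S}X_S)$ is monotone, submodular, and satisfies $V(\emptyset)=0$. The sketch below is a minimal numerical illustration of those three properties and is not part of the commit; the helper name edp_value, the random feature matrix, and the experiment indices are made-up examples (costs $c_i$ and the budget $B$ play no role here).

import numpy as np

def edp_value(X, S):
    # V(S) = log det(I_d + X_S^T X_S), where the rows of X indexed by S are the chosen experiments.
    d = X.shape[1]
    X_S = X[sorted(S), :]
    return np.linalg.slogdet(np.eye(d) + X_S.T @ X_S)[1]

# Made-up instance: 5 experiments with 2-dimensional features.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))

print(edp_value(X, set()))                        # V(empty set) = 0.0
print(edp_value(X, {0}) <= edp_value(X, {0, 1}))  # monotonicity: True

# Diminishing returns (submodularity): adding experiment 2 helps less
# once experiment 1 is already in the selected set.
gain_small = edp_value(X, {0, 2}) - edp_value(X, {0})
gain_large = edp_value(X, {0, 1, 2}) - edp_value(X, {0, 1})
print(gain_large <= gain_small)                   # True

These are the properties that make the budget-constrained maximization amenable to the submodular-optimization machinery alluded to in the commented-out lines of the diff.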

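The NP-hardness remark in the diff reduces \textsc{Knapsack} to EDP in dimension $d=1$ by setting $x_i=w_i$. One way to spell out that special case, using only the symbols already defined in the commit ($w_i$, $c_i$, $B$), is the following; the algebra below is an editorial gloss, not text from the paper:

For $d=1$, $\T{X_S}X_S = \sum_{i\in S} x_i^2 = \sum_{i\in S} w_i^2$, so
\[
V(S) = \log\Bigl(1 + \sum_{i\in S} w_i^2\Bigr).
\]
Since $\log(1+t)$ is strictly increasing, maximizing $V(S)$ subject to $\sum_{i\in S} c_i \leq B$ is equivalent to maximizing $\sum_{i\in S} w_i^2$ under the same budget, i.e., a budgeted selection problem of \textsc{Knapsack} type in which item $i$ has cost $c_i$ and value $w_i^2$.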