| author | Stratis Ioannidis <stratis@stratis-Latitude-E6320.(none)> | 2012-11-04 19:59:20 -0800 |
|---|---|---|
| committer | Stratis Ioannidis <stratis@stratis-Latitude-E6320.(none)> | 2012-11-04 19:59:20 -0800 |
| commit | 35ff12aed97bcae04e89853fefa7443a03875bec (patch) | |
| tree | 3ea4dded79000d1b32cbb6eb42aaddf9c1102206 /problem.tex | |
| parent | c8865535a14a79581d3b9eb7c52cb853a831e180 (diff) | |
| download | recommendation-35ff12aed97bcae04e89853fefa7443a03875bec.tar.gz | |
small stuff
Diffstat (limited to 'problem.tex')
| -rw-r--r-- | problem.tex | 22 |
1 file changed, 10 insertions, 12 deletions
diff --git a/problem.tex b/problem.tex
index e3ee84b..2a40185 100644
--- a/problem.tex
+++ b/problem.tex
@@ -6,7 +6,7 @@ More precisely, putting cost considerations aside, suppose that an experimenter
\begin{align}
y_i = \T{\beta} x_i + \varepsilon_i,\quad\forall i\in\mathcal{N},\label{model}
\end{align}
-where $\beta$ a vector in $\reals^d$, commonly referred to as the \emph{model}, and $\varepsilon_i$ (the \emph{measurement noise}) are independent, normally distributed random variables with zero mean and variance $\sigma^2$.
+where $\beta$ a vector in $\reals^d$, commonly referred to as the \emph{model}, and $\varepsilon_i$ (the \emph{measurement noise}) are independent, normally distributed random variables with mean 0 and variance $\sigma^2$.
The purpose of these experiments is to allow the experimenter to estimate the model $\beta$.
In particular, under \eqref{model}, the maximum likelihood estimator of $\beta$ is the \emph{least squares} estimator:
for $X_S=[x_i]_{i\in S}\in \reals^{|S|\times d}$ the matrix of experiment features and $y_S=[y_i]_{i\in S}\in\reals^{|S|}$ the observed measurements,
@@ -47,12 +47,12 @@ A \emph{budget feasible reverse auction} \cite{singer-mechanisms} comprises a
set of items $\mathcal{N}=\{1,\ldots,n\}$ as well as a single buyer. Each item has a cost $c_i\in\reals_+$. Moreover, the buyer has a positive value function $V:2^{\mathcal{N}}\to \reals_+$, as well as a budget $B\in \reals_+$.
In the \emph{full information case}, the costs $c_i$ are common knowledge; the objective of the buyer in this context is to select a set $S$ maximizing the value $V(S)$ subject to the constraint $\sum_{i\in S} c_i\leq
-B$.We write:
+B$. We write:
\begin{equation}\label{eq:non-strategic}
OPT(V,\mathcal{N}, B) = \max_{S\subseteq\mathcal{N}} \left\{V(S) \mid \sum_{i\in S}c_i\leq B\right\}
\end{equation}
-for the optimal value achievable in the full-information case. \stratis{Should be $OPT(V,c,B)$\ldots better drop the arguments here and introduce them wherever necessary.}
+for the optimal value achievable in the full-information case. %\stratis{Should be $OPT(V,c,B)$\ldots better drop the arguments here and introduce them wherever necessary.}
In the \emph{strategic case}, each items in $\mathcal{N}$ is held by a different strategic agent, whose cost is a-priori private. A \emph{mechanism} $\mathcal{M} = (f,p)$ comprises (a) an \emph{allocation function}
@@ -74,9 +74,9 @@ A mechanism is truthful iff every $i \in \mathcal{N}$ and every two cost vector
$p_i(c_i,c_{-i}) - s_i(c_i,c_{-i})\cdot c_i \geq p_i(c_i',c_{-i}) - s(c_i',c_{-i})\cdot c_i$.
\item \emph{Budget Feasibility.} The sum of the payments should not exceed the budget constraint:
- \begin{displaymath}
- \sum_{i\in\mathcal{N}} p_i \leq B.
- \end{displaymath}
+ %\begin{displaymath}
+ $ \sum_{i\in\mathcal{N}} p_i \leq B.$
+ %\end{displaymath}
\item \emph{Approximation ratio.} The value of the allocated set should not be too far from the optimum value of the full information case \eqref{eq:non-strategic}.
Formally, there must exist some $\alpha\geq 1$
@@ -130,18 +130,16 @@ incentivize her participation in the study. Note that, in this setup, the featur
%\subsection{D-Optimality Criterion}
Ideally, motivated by the $D$-optimality criterion, we would like to design a mechanism that maximizes \eqref{dcrit} within a good approximation ratio. As \eqref{dcrit} may take arbitrarily small negative values, to define a meaningful approximation one would consider the (equivalent) maximization of $V(S) = \det\T{X_S}X_S$.
%, for some strictly increasing, on-to function $f:\reals_+\to\reals_+$.
-However, the following lower bound implies that such an optimization goal cannot be attained under the costraints of truthfulness, budget feasibility, and individional rationallity.
-
+However, the following lower bound implies that such an optimization goal cannot be attained under the costraints of truthfulness, budget feasibility, and individual rationallity.
\begin{lemma}
For any $M>1$, there is no $M$-approximate, truthful, budget feasible, individionally rational mechanism for a budget feasible reverse auction with value fuction $V(S) = \det{\T{X_S}X_S}$.
\end{lemma}
\begin{proof}
\input{proof_of_lower_bound1}
\end{proof}
-
This negative result motivates us to study a problem with a modified objective:
\begin{center}
-\textsc{ExperimentalDesign} (EDP)
+\textsc{ExperimentalDesignProblem} (EDP)
\begin{subequations}
\begin{align}
\text{Maximize}\quad V(S) &= \log\det(I_d+\T{X_S}X_S) \label{modified} \\
@@ -151,9 +149,9 @@ This negative result motivates us to study a problem with a modified objective:
\end{center}
where $I_d\in \reals^{d\times d}$ is the identity matrix. One possible interpretation of \eqref{modified} is that, prior to the auction, the experimenter has free access to $d$ experiments whose features form an orthonormal basis in $\reals^d$. However, \eqref{modified} can also be motivated in the context of \emph{Bayesian experimental design} \cite{chaloner1995bayesian}. In short, the objective \eqref{modified} arises naturally when the experimenter retrieves the model $\beta$ through \emph{ridge regression}, rather than the linear regression \eqref{leastsquares} over the observed data; we explore this connection in Section~\ref{sec:bed}.
-Note that maximizing \eqref{modified} is equivalent to maximizing \eqref{dcrit} in the full-information case. In particular, $\det(\T{X_S}X_S)> \det(\T{X_{S'}}X_{S'})$ iff $\det(I_d+\T{X_S}X_S)>\det(I_d+\T{X_{S'}}X_{S'})$. In addition, \eqref{edp} (and the equivalent problem with objective \eqref{dcrit}) are NP-hard; to see this, note that \textsc{Knapsack} reduces to EDP with dimension $d=1$ by mapping the weight of each item $w_i$ to an experiment with $x_i=w_i$. Nevertheless, \eqref{modified} is submodular, monotone and satifies $V(\emptyset) = 0$, allowing us to use the extensive machinery for the optimization of submodular functions, as well as recent results in the context of budget feasible auctions \cite{chen,singer-mechanisms}.
+Note that maximizing \eqref{modified} is equivalent to maximizing \eqref{dcrit} in the full-information case. In particular, $\det(\T{X_S}X_S)> \det(\T{X_{S'}}X_{S'})$ iff $\det(I_d+\T{X_S}X_S)>\det(I_d+\T{X_{S'}}X_{S'})$. In addition, \eqref{edp}---and the equivalent problem with objective \eqref{dcrit}---are NP-hard; to see this, note that \textsc{Knapsack} reduces to EDP with dimension $d=1$ by mapping the weight of each item $w_i$ to an experiment with $x_i=w_i$. Nevertheless, \eqref{modified} is submodular, monotone and satifies $V(\emptyset) = 0$, allowing us to use the extensive machinery for the optimization of submodular functions, as well as recent results in the context of budget feasible auctions \cite{chen,singer-mechanisms}.

-\stratis{A potential problem is that all of the above properties hold also for, \emph{e.g.}, $V(S)=\log(1+\det(\T{X_S}X_S))$\ldots}
+%\stratis{A potential problem is that all of the above properties hold also for, \emph{e.g.}, $V(S)=\log(1+\det(\T{X_S}X_S))$\ldots}
\begin{comment}
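The value function introduced in the last two hunks, V(S) = log det(I_d + X_S^T X_S), is easy to evaluate numerically. The sketch below is not part of the commit; it is a minimal illustration, assuming NumPy, with a made-up feature matrix `X`, index sets `S`, `T`, and item `i`, that spot-checks the monotonicity, normalization, and diminishing-returns properties the text appeals to.

```python
# Illustrative sketch only (not from problem.tex): evaluating the EDP
# objective V(S) = log det(I_d + X_S^T X_S) and spot-checking the properties
# cited in the diff.  The matrix X, the sets S, T, and the index i are made up.
import numpy as np

def edp_value(X, S):
    """V(S) = log det(I_d + X_S^T X_S) for the experiments indexed by S."""
    d = X.shape[1]
    X_S = X[list(S), :]
    # slogdet is more stable than det when X_S^T X_S is ill-conditioned;
    # the sign is always +1 because I_d + X_S^T X_S is positive definite.
    _, logdet = np.linalg.slogdet(np.eye(d) + X_S.T @ X_S)
    return logdet

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))        # six hypothetical experiments with d = 3
S, T, i = {0, 1}, {0, 1, 2, 3}, 4  # nested sets S ⊆ T and an item i outside T

# Monotone and normalized: V(T) >= V(S) >= V(∅) = 0
print(edp_value(X, T) >= edp_value(X, S) >= edp_value(X, set()) >= 0.0)
# Submodular: the marginal gain of adding i shrinks as the set grows
print(edp_value(X, S | {i}) - edp_value(X, S)
      >= edp_value(X, T | {i}) - edp_value(X, T))
```

Using `slogdet` rather than `det` is only a numerical-stability choice; it does not change the objective.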

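The first hunk refers to the least-squares estimator \eqref{leastsquares} and the last one to its ridge-regression counterpart, but neither closed form is visible in this diff. The sketch below is therefore an assumption-labeled illustration using the standard textbook expressions, again in NumPy, with the ridge weight set to 1 so that the matrix being inverted matches the I_d + X_S^T X_S term in the EDP objective; the data are synthetic.

```python
# Illustrative sketch only: the least-squares estimator referenced as
# \eqref{leastsquares} and a ridge-regression variant as alluded to in the
# Bayesian-design remark.  The closed forms below are the standard textbook
# ones, not copied from problem.tex; all inputs are synthetic.
import numpy as np

def least_squares(X_S, y_S):
    # beta_hat = (X_S^T X_S)^{-1} X_S^T y_S, the MLE under the noise model \eqref{model}
    return np.linalg.solve(X_S.T @ X_S, X_S.T @ y_S)

def ridge(X_S, y_S, lam=1.0):
    # beta_hat = (lam*I_d + X_S^T X_S)^{-1} X_S^T y_S; with lam = 1 the matrix
    # being inverted is exactly the I_d + X_S^T X_S appearing in the EDP objective.
    d = X_S.shape[1]
    return np.linalg.solve(lam * np.eye(d) + X_S.T @ X_S, X_S.T @ y_S)

rng = np.random.default_rng(1)
beta = rng.normal(size=3)                     # hypothetical true model in R^3
X_S = rng.normal(size=(20, 3))                # features of purchased experiments
y_S = X_S @ beta + 0.1 * rng.normal(size=20)  # noisy measurements per the model
print(least_squares(X_S, y_S))
print(ridge(X_S, y_S))
```

With well-spread experiments the two estimates are nearly identical; they diverge mainly when X_S^T X_S is close to singular, which is, loosely, the situation the identity term in the modified objective accounts for.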