author     Stratis Ioannidis <stratis@stratis-Latitude-E6320.(none)>    2013-07-06 18:01:06 -0700
committer  Stratis Ioannidis <stratis@stratis-Latitude-E6320.(none)>    2013-07-06 18:01:06 -0700
commit     081cddcb61cc25216994c6328aa2d9cebfcc33cd (patch)
tree       5597da8f8fbc24159eebfefec66f047071d7a5a6 /problem.tex
parent     4a89391c9d18d8a2e1b3a88bc3bf108890e2cad1 (diff)
download   recommendation-081cddcb61cc25216994c6328aa2d9cebfcc33cd.tar.gz
minor
Diffstat (limited to 'problem.tex')
-rw-r--r--  problem.tex | 9 ++++-----
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/problem.tex b/problem.tex
index f3ac078..851f691 100644
--- a/problem.tex
+++ b/problem.tex
@@ -54,9 +54,8 @@ Maximizing $I(\beta;y_S)$ is therefore equivalent to maximizing $\log\det(R+ \T{
$D$-optimality criterion
\cite{pukelsheim2006optimal,atkinson2007optimum,chaloner1995bayesian}.
-Note that the estimator $\hat{\beta}$ is a linear map of $y_S$; as $y_S$ is a multidimensional normal r.v., so is $\hat{\beta}$ (the randomness coming from the noise terms $\varepsilon_i$ and the prior on $\beta$).
-In particular, $\hat{\beta}$ has
-covariance $\sigma^2(R+\T{X_S}X_S)^{-1}$. As such, maximizing $I(\beta;y_S)$ can alternatively be seen as a means of reducing the uncertainty on estimator $\hat{\beta}$ by minimizing the product of the eigenvalues of its covariance.
+Note that the estimator $\hat{\beta}$ is a linear map of $y_S$. As $y_S$ is a multidimensional normal r.v., so is $\hat{\beta}$; in particular, $\hat{\beta}$ has
+covariance $\sigma^2(R+\T{X_S}X_S)^{-1}$. As such, maximizing $I(\beta;y_S)$ can alternatively be seen as a means of reducing the uncertainty in the estimator $\hat{\beta}$ by minimizing the product of the eigenvalues of its covariance (this product equals the determinant of the covariance).
%An alternative interpretation, given that $(R+ \T{X_S}X_S)^{-1}$ is the covariance of the estimator $\hat{\beta}$, is that it tries to minimize the
%which is indeed a function of the covariance matrix $(R+\T{X_S}X_S)^{-1}$.
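For readers who want to check the determinant/eigenvalue link numerically, here is a minimal Python/NumPy sketch (not part of problem.tex; R, X_S, sigma, d and k below are illustrative placeholders): maximizing log det(R + X_S^T X_S) is the same as minimizing the product of the eigenvalues of the estimator covariance sigma^2 (R + X_S^T X_S)^{-1}.

import numpy as np

# Illustrative placeholders only: R stands in for the prior matrix and
# X_S for the design matrix of the selected experiments S.
rng = np.random.default_rng(0)
d, k = 3, 5                        # feature dimension, |S|
sigma = 0.5                        # noise standard deviation
R = np.eye(d)
X_S = rng.standard_normal((k, d))

M = R + X_S.T @ X_S                          # information matrix
objective = np.linalg.slogdet(M)[1]          # log det(R + X_S^T X_S)

cov = sigma**2 * np.linalg.inv(M)            # covariance of the estimator
eig_prod = np.prod(np.linalg.eigvalsh(cov))  # product of eigenvalues = det(cov)

# log det(cov) = 2 d log(sigma) - log det(M), so maximizing the objective
# minimizes the product of the covariance eigenvalues.
assert np.isclose(np.log(eig_prod), 2 * d * np.log(sigma) - objective)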
@@ -148,8 +147,8 @@ $S:\reals_+^n \to 2^\mathcal{N}$, determining the set
$p:\reals_+^n\to \reals_+^n$, determining the payments $[p_i(c)]_{i\in \mathcal{N}}$ received by experiment subjects.
We seek mechanisms that are \emph{normalized} (unallocated experiments receive zero payments), \emph{individually rational} (payments for allocated experiments exceed costs), have \emph{no positive transfers} (payments are non-negative), and are \emph{budget feasible} (the sum of payments does not exceed the budget $B$).
-We relax the notion of truthfulness to \emph{$\delta$-truthfulness}, requiring that reporting one's true cost is an \emph{almost-dominant} strategy: no subject increases their utility by reporting a cost that differs more than $\delta$ from their true cost. Under this definition, a mechanism is truthful if $\delta=0$.
-In addition, we would like the allocation $S(c)$ to be of maximal value; however, $\delta$-truthfulness, as well as the hardness of \EDP{}, preclude achieving this goal. Hence, we seek mechanisms with that yield a \emph{constant approximation ratio}, \emph{i.e.}, there exists $\alpha\geq 1$ s.t.~$OPT \leq \alpha V(S(c))$, and are \emph{computationally efficient}, in that the allocation and payment functions can be computed in polynomial time.
+We relax the notion of truthfulness to \emph{$\delta$-truthfulness}, requiring that reporting one's true cost is an \emph{almost-dominant} strategy: no subject increases their utility by reporting a cost that differs by more than $\delta>0$ from their true cost. Under this definition, a mechanism is truthful if $\delta=0$.
+In addition, we would like the allocation $S(c)$ to be of maximal value; however, $\delta$-truthfulness, as well as the hardness of \EDP{}, precludes achieving this goal. Hence, we seek mechanisms that yield a \emph{constant approximation ratio}, \emph{i.e.}, there exists $\alpha\geq 1$ s.t.~$OPT \leq \alpha V(S(c))$, and are \emph{computationally efficient}, in that $S$ and $p$ can be computed in polynomial time.
We note that the $\max$ operator in the greedy algorithm \eqref{eq:max-algorithm} breaks
truthfulness. Though this is not true for all submodular functions (see, \emph{e.g.}, \cite{singer-mechanisms}), it is true for the objective of \EDP{}: we show this in Appendix~\ref{sec:non-monotonicity}. This motivates our study of more complex mechanisms.
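As a rough illustration of the ingredients above (the exact rule \eqref{eq:max-algorithm}, the value function $V$, and the paper's payment scheme are defined elsewhere and are not reproduced here), the following Python sketch pairs a generic cost-aware greedy allocation with checks of the four mechanism properties; the value function and the proportional payment rule are placeholders, not the paper's mechanism.

import numpy as np

def V(S, X):
    # Placeholder set value: log det(I + X_S^T X_S), mirroring the D-optimal
    # objective discussed earlier; not necessarily the paper's V.
    if not S:
        return 0.0
    X_S = X[sorted(S)]
    return np.linalg.slogdet(np.eye(X.shape[1]) + X_S.T @ X_S)[1]

def greedy_allocation(X, costs, budget):
    # Generic greedy of the kind the max operator suggests: repeatedly add the
    # subject with the largest marginal value per reported cost, as long as the
    # total reported cost fits within the budget.
    S, spent, remaining = set(), 0.0, set(range(len(costs)))
    while remaining:
        gain = {i: (V(S | {i}, X) - V(S, X)) / costs[i] for i in remaining}
        best = max(gain, key=gain.get)
        if gain[best] <= 0 or spent + costs[best] > budget:
            break
        S.add(best); spent += costs[best]; remaining.discard(best)
    return S

def payments(S, costs, budget):
    # Placeholder payment rule: split the budget among allocated subjects in
    # proportion to their reported costs; unallocated subjects receive zero.
    total = sum(costs[i] for i in S)
    return {i: (budget * costs[i] / total if i in S and total > 0 else 0.0)
            for i in range(len(costs))}

X = np.random.default_rng(1).standard_normal((6, 3))
costs = [0.5, 1.0, 0.8, 0.3, 1.2, 0.6]
B = 2.0
S = greedy_allocation(X, costs, B)
p = payments(S, costs, B)

assert all(p[i] == 0.0 for i in range(len(costs)) if i not in S)  # normalized
assert all(p[i] >= costs[i] for i in S)                           # individually rational
assert all(v >= 0.0 for v in p.values())                          # no positive transfers
assert sum(p.values()) <= B + 1e-9                                # budget feasible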