Diffstat (limited to 'intro.tex')
-rw-r--r--   intro.tex   11
1 file changed, 4 insertions, 7 deletions
diff --git a/intro.tex b/intro.tex
index 22d7a69..07abf4b 100644
--- a/intro.tex
+++ b/intro.tex
@@ -1,7 +1,3 @@
-%The statistical analysis of user data is a widely spread practice among Internet companies, which routinely use machine learning techniques over vast records of user data to perform inference and classification tasks integral to their daily operations. Statistical analysis of personal data collected through surveys or experimentation is also the cornerstone of marketing research, as well as research in a variety of experimental sciences such as medicine and sociology.
-
-%This state of affairs has motivated several recent studies of \emph{data markets}, in which an analyst wishes to purchase data from a set of users \cite{...}. Eah user discloses her data to the analyst only if she receives an appropriate compensation. Assuming that the analyst has a limited budget, a natural question to ask is how she should allocate her budget across different users.
-
In the classic setting of experimental design \cite{pukelsheim2006optimal,atkinson2007optimum},
an {\em experimenter} \E\ has access to a population of $n$ potential experiment subjects.
Each subject $i\in \{1,\ldots,n\}$ is associated with a set of parameters (or features) $x_i\in \reals^d$,
@@ -20,8 +16,9 @@ We depart from this classical setup by viewing experimental design in a strategi
In our setting, experiments cannot be manipulated and hence measurements are reliable. %\footnote{Thus, the experiments of our interest are statistically significant ones where each experiment provides a reliable outcome.}
\E{} has a total budget of $B$ to conduct all the experiments.
There is a cost $c_i$ associated with experimenting on
-subject $i$ which varies from subject to subject. This cost $c_i$ is determined by the subject $i$: it may be viewed as the
-cost $i$ incurs when tested and for which she needs to be reimbursed; or, it might be viewed as the incentive for $i$ to participate in the experiment; or, it might be the intrinsic worth of the data to the subject. The economic aspect of paying subjects has always been inherent in experimental design: experimenters often work within strict budgets and design creative incentives. Subjects often negotiate better incentives or higher payments.
+subject $i$, which varies from subject to subject. This cost $c_i$ is determined by subject $i$ and reported to \E; subjects are strategic and may misreport their costs. Intuitively, $c_i$ may be viewed as the
+cost that $i$ incurs when tested and for which she needs to be reimbursed; as the incentive required for $i$ to participate in the experiment; or as the intrinsic worth of the data to the subject.
+The economic aspect of paying subjects has always been inherent in experimental design: experimenters often work within strict budgets and design creative incentives, while subjects often negotiate better incentives or higher payments.
However, we are not aware of a principled study of this setting from a strategic point of view, when subjects declare their costs and therefore determine their payment. Such a setting is increasingly realistic, given the growth of these experiments over the Internet. % and associated data markets.
% When subjects are strategic, they may have an incentive to misreport their cost, leading to the need for a sophisticated choice of experiments and payments. Arguably, user incentivization is of particular pertinence due to the extent of statistical analysis over user data on the Internet. %, which has led to the rise of several different research efforts in studying data markets \cite{...}.
@@ -41,7 +38,7 @@ subject to a budget constraint $\sum_{i\in S}c_i\leq B$, where $B$ is \E's budge
\smallskip
The objective function, which is the key ingredient of this formulation, is formally obtained by optimizing the information gain on $\beta$ when the latter is learned through ridge regression, and is related to the so-called $D$-optimality criterion~\cite{pukelsheim2006optimal,atkinson2007optimum}.
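For intuition, here is a sketch of how such an information-gain objective typically arises; since the exact form of \eqref{obj} is not reproduced in this excerpt, the noise model, prior, and normalization below are assumptions rather than the paper's definitions. Suppose each selected subject $i$ yields a measurement $y_i = \beta^\top x_i + \varepsilon_i$ with $\varepsilon_i \sim \mathcal{N}(0,\sigma^2)$, and $\beta$ is learned by ridge regression with parameter $\lambda$ (equivalently, under a Gaussian prior $\beta \sim \mathcal{N}(0, \tfrac{\sigma^2}{\lambda} I_d)$). The information gain on $\beta$ from experimenting on a set $S$ is then
\[
V(S) \;=\; \tfrac{1}{2}\,\log\det\Big(I_d \,+\, \tfrac{1}{\lambda}\sum_{i\in S} x_i x_i^\top\Big),
\]
a log-determinant value function of the kind optimized under the $D$-optimality criterion.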
\item
-We present a polynomial time, $\delta$-truthful mechanism for \SEDP{}, yielding a constant factor ($\approx 12.98$) approximation to the optimal value of \eqref{obj}. In contrast to this, we show that no truthful, budget-feasible mechanisms are possible for \SEDP{} within a factor 2 approximation.
+We present a polynomial-time mechanism scheme for \SEDP{} that is approximately truthful and yields a constant-factor approximation to the optimal value of \eqref{obj}. In particular, for any small $\delta>0$ and $\varepsilon>0$, we can construct a $(12.98,\varepsilon)$-approximate mechanism that is $\delta$-truthful and runs in time polynomial in both $n$ and $\log\log\frac{B}{\varepsilon\delta}$. In contrast, we show that no truthful, budget-feasible mechanism can approximate \SEDP{} within a factor of 2.
\smallskip
We note that the objective \eqref{obj} is submodular. Using this fact, applying previous results on budget-feasible mechanism design under general submodular objectives~\cite{singer-mechanisms,chen} would yield either a deterministic, truthful, constant-approximation mechanism that requires exponential time, or a randomized, (universally) truthful, poly-time mechanism that achieves a constant approximation ratio only \emph{in expectation} (\emph{i.e.}, its approximation guarantee on a given instance may in fact be unbounded).
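To illustrate the submodularity claim using the assumed log-determinant form sketched above (again an assumption, not necessarily the exact \eqref{obj}): for any $S \subseteq T \subseteq \{1,\ldots,n\}$ and any $j \notin T$, the marginal gain of adding subject $j$ exhibits diminishing returns,
\[
V(S\cup\{j\}) - V(S) \;=\; \tfrac{1}{2}\log\Big(1 + \tfrac{1}{\lambda}\, x_j^\top A_S^{-1} x_j\Big) \;\geq\; V(T\cup\{j\}) - V(T),
\qquad A_S = I_d + \tfrac{1}{\lambda}\sum_{i\in S} x_i x_i^\top,
\]
since $A_S \preceq A_T$ implies $A_S^{-1} \succeq A_T^{-1}$. Monotonicity and submodularity of this kind are precisely the properties that the budget-feasible mechanism design framework for submodular objectives relies on.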