 related.tex | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/related.tex b/related.tex
index e2c3138..5c5357d 100644
--- a/related.tex
+++ b/related.tex
@@ -20,13 +20,13 @@ a truthful, $O(\log^3 n)$-approximate mechanism
 \subsection{Data Markets}
- A series of recent papers \cite{mcsherrytalwar,approximatemechanismdesign,xiao:privacy-truthfulness,chen:privacy-truthfulness} consider the related problem of retrieving data from an \textit{unverified} database, where strategic users may misreport their data to a data analyst to ensure their privacy. \citeN{mcsherrytalwar} argue that \emph{differentially private} mechanisms offer a form of \emph{approximate truthfulness}: if users have a utility that depends on their privacy, reporting their data untruthfully can only increase their utility by a small amount. %\citeN{xiao:privacy-truthfulness}, improving upon earlier work by~\citeN{approximatemechanismdesign}, constructs mechanisms that simultaneously achieve exact truthfulness as well as differential privacy.
-We depart by assuming that experiment outcomes are tamper-proof, cannot be manipulated.
-A different set of papers \cite{ghosh-roth:privacy-auction,roth-liggett,pranav} consider a setting where data cannot be misreported, but the utility of users is a function of the differential privacy guarantee an analyst provides them. In contrast, any privacy costs in our setup are internalized in the costs $c_i$. %Eliciting private data through a \emph{survey} \cite{roth-liggett}, whereby individuals first decide whether to participate in the survey and then report their data,
+ A series of recent papers \cite{mcsherrytalwar,approximatemechanismdesign,xiao:privacy-truthfulness,chen:privacy-truthfulness} consider the related problem of retrieving data from an \textit{unverified} database, where strategic users may misreport their data to ensure their privacy. \citeN{mcsherrytalwar} argue that \emph{differentially private} mechanisms offer a form of \emph{approximate truthfulness}: if users have a utility that depends on their privacy, reporting their data untruthfully can only increase their utility by a small amount. %\citeN{xiao:privacy-truthfulness}, improving upon earlier work by~\citeN{approximatemechanismdesign}, constructs mechanisms that simultaneously achieve exact truthfulness as well as differential privacy.
+We depart by assuming that experiment outcomes are tamper-proof and cannot be manipulated.
+A different set of papers \cite{ghosh-roth:privacy-auction,roth-liggett,pranav} consider a setting where data cannot be misreported, but the utility of users is a function of the differential privacy guarantee an analyst provides them. We do not focus on privacy; any privacy costs in our setup are internalized in the costs $c_i$. %Eliciting private data through a \emph{survey} \cite{roth-liggett}, whereby individuals first decide whether to participate in the survey and then report their data,
 % also falls under the unverified database setting \cite{xiao:privacy-truthfulness}.
 In the \emph{verified} database setting, \citeN{ghosh-roth:privacy-auction} and~\citeN{pranav} consider budgeted auctions where users have a utility again captured by differential privacy. Our work departs from the above setups in that utilities do not involve privacy, whose effects are assumed to be internalized in the costs reported by the users; crucially, we also assume that experiments are tamper-proof, and individuals can misreport their costs but not their values.
 \sloppy
-Our work is closest to the survey setup of~\citeN{roth-schoenebeck}, who also consider how to sample individuals with different features who report a hidden value at a certain cost. The authors assume a prior on the joint distribution between costs and features, and wish to obtain an unbiased estimate of the expectation of the hidden value under the constraints of truthfulness, budget feasibility and individual rationality. Our work departs by learning a more general statistic (a linear model) than data means. We note that, as in \cite{roth-schoenebeck}, costs and features can be arbitrarily correlated (our results are prior-free).
+Our work is closest to the survey setup of~\citeN{roth-schoenebeck}, who also consider how to sample individuals with different features who report a hidden value at a certain cost. The authors assume a joint distribution between costs $c_i$ and features $x_i$, and wish to obtain an unbiased estimate of the expectation of the hidden value over the population, under the constraints of truthfulness, budget feasibility and individual rationality. Our work departs by learning a more general statistic (a linear model) rather than data means. We note that, as in \cite{roth-schoenebeck}, costs $c_i$ and features $x_i$ can be arbitrarily correlated in our work: the experimenter's objective \eqref{obj} does not depend on their joint distribution.
 \fussy
