author    Stratis Ioannidis <stratis@stratis-Latitude-E6320.(none)>  2013-02-11 09:37:09 -0800
committer Stratis Ioannidis <stratis@stratis-Latitude-E6320.(none)>  2013-02-11 09:37:09 -0800
commit    8c09cfd7da709aab03fb004b58ecd8e1eb4fb553 (patch)
tree      fe24e8514094cfcd172fce175bf6df60d0031d9a /related.tex
parent    114a6b8eac3e6addebe84b831c5eafbec7bc9ef4 (diff)
download  recommendation-8c09cfd7da709aab03fb004b58ecd8e1eb4fb553.tar.gz
muthu
Diffstat (limited to 'related.tex')
-rw-r--r--  related.tex  |  8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/related.tex b/related.tex
index a4956e9..e2c3138 100644
--- a/related.tex
+++ b/related.tex
@@ -20,10 +20,10 @@ a truthful, $O(\log^3 n)$-approximate mechanism
\subsection{Data Markets}
- A series of recent papers \cite{mcsherrytalwar,approximatemechanismdesign,xiao:privacy-truthfulness,chen:privacy-truthfulness} consider the related problem of retrieving data from an \textit{unverified} database: the auctioneer cannot verify the data reported by individuals and therefore must incentivize them to report truthfully.
-\citeN{mcsherrytalwar} argue that \emph{differentially private} mechanisms offer a form of \emph{approximate truthfulness}: if users have a utility that depends on their privacy, reporting their data untruthfully can only increase their utility by a small amount. \citeN{xiao:privacy-truthfulness}, improving upon earlier work by~\citeN{approximatemechanismdesign}, constructs mechanisms that
-simultaneously achieve exact truthfulness as well as differential privacy. Eliciting private data through a \emph{survey} \cite{roth-liggett}, whereby individuals first decide whether to participate in the survey and then report their data,
- also falls under the unverified database setting \cite{xiao:privacy-truthfulness}. In the \emph{verified} database setting, \citeN{ghosh-roth:privacy-auction} and~\citeN{pranav} consider budgeted auctions where users have a utility again captured by differential privacy. Our work departs from the above setups in that utilities do not involve privacy, whose effects are assumed to be internalized in the costs reported by the users; crucially, we also assume that experiments are tamper-proof, and individuals can misreport their costs but not their values.
+ A series of recent papers \cite{mcsherrytalwar,approximatemechanismdesign,xiao:privacy-truthfulness,chen:privacy-truthfulness} consider the related problem of retrieving data from an \textit{unverified} database, where strategic users may misreport their data to a data analyst to ensure their privacy. \citeN{mcsherrytalwar} argue that \emph{differentially private} mechanisms offer a form of \emph{approximate truthfulness}: if users have a utility that depends on their privacy, reporting their data untruthfully can only increase their utility by a small amount. %\citeN{xiao:privacy-truthfulness}, improving upon earlier work by~\citeN{approximatemechanismdesign}, constructs mechanisms that simultaneously achieve exact truthfulness as well as differential privacy.
+We depart from this line of work by assuming that experiment outcomes are tamper-proof and cannot be manipulated.
+A different set of papers \cite{ghosh-roth:privacy-auction,roth-liggett,pranav} consider a setting where data cannot be misreported, but the utility of users is a function of the differential privacy guarantee an analyst provides them. In contrast, any privacy costs in our setup are internalized in the costs $c_i$. %Eliciting private data through a \emph{survey} \cite{roth-liggett}, whereby individuals first decide whether to participate in the survey and then report their data,
+% also falls under the unverified database setting \cite{xiao:privacy-truthfulness}. In the \emph{verified} database setting, \citeN{ghosh-roth:privacy-auction} and~\citeN{pranav} consider budgeted auctions where users have a utility again captured by differential privacy. Our work departs from the above setups in that utilities do not involve privacy, whose effects are assumed to be internalized in the costs reported by the users; crucially, we also assume that experiments are tamper-proof, and individuals can misreport their costs but not their values.
\sloppy
Our work is closest to the survey setup of~\citeN{roth-schoenebeck}, who also consider how to sample individuals with different features who report a hidden value at a certain cost. The authors assume a prior on the joint distribution of costs and features, and wish to obtain an unbiased estimate of the expectation of the hidden value under the constraints of truthfulness, budget feasibility and individual rationality. Our work departs by learning a more general statistic (a linear model) than the mean of the data. We note that, as in \cite{roth-schoenebeck}, costs and features can be arbitrarily correlated (our results are prior-free).
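For reference, the approximate-truthfulness property of differentially private mechanisms invoked in the added text above can be stated as a short LaTeX sketch. This is the standard argument attributed to \citeN{mcsherrytalwar}; the notation ($M$, $D$, $D'$, $S$, $\varepsilon$, $u_i$) is illustrative and does not appear in related.tex.

% Sketch (illustrative, not part of the diff above): differential privacy implies approximate truthfulness.
A mechanism $M$ is $\varepsilon$-differentially private if, for every pair of databases
$D, D'$ differing in a single user's report and every set of outcomes $S$,
\[
  \Pr[M(D) \in S] \;\le\; e^{\varepsilon} \, \Pr[M(D') \in S].
\]
Consequently, for any non-negative utility $u_i$ a user derives from the outcome,
\[
  \mathbb{E}[u_i(M(D'))] \;\le\; e^{\varepsilon}\, \mathbb{E}[u_i(M(D))]
  \;\approx\; (1+\varepsilon)\, \mathbb{E}[u_i(M(D))] \quad \text{for small } \varepsilon,
\]
so misreporting one's data can increase expected utility by at most a factor $e^{\varepsilon}$.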