Diffstat (limited to 'related.tex')
-rw-r--r--  related.tex  |  7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/related.tex b/related.tex
index 01d114c..66d0d0e 100644
--- a/related.tex
+++ b/related.tex
@@ -1,9 +1,8 @@
\section{Related work}
\label{sec:related}
-\subsection{Experimental Design} The classic experimental design problem, which we also briefly review in Section~\ref{sec:edprelim}, deals with selecting which $k$ experiments to conduct among a set of $n$ possible experiments. It is a well-studied problem in both the non-Bayesian \cite{pukelsheim2006optimal,atkinson2007optimum,boyd2004convex} and the Bayesian setting \cite{chaloner1995bayesian}. Beyond $D$-optimality, several other objectives are encountered in the literature \cite{pukelsheim2006optimal}; many involve some function of the covariance of the estimate of $\beta$ or of its inverse, the information matrix, such as $E$-optimality (maximizing the smallest eigenvalue of the information matrix) or $T$-optimality (maximizing its trace). Our focus on $D$-optimality is motivated by both its tractability and its relationship to the information gain. %are encountered in the literature, though they do not relate to entropy as $D$-optimality. We leave the task of approaching the maximization of such objectives from a strategic point of view as an open problem.
-
-
+\junk{\subsection{Experimental Design} The classic experimental design problem, which we also briefly review in Section~\ref{sec:edprelim}, deals with selecting which $k$ experiments to conduct among a set of $n$ possible experiments. It is a well-studied problem in both the non-Bayesian \cite{pukelsheim2006optimal,atkinson2007optimum,boyd2004convex} and the Bayesian setting \cite{chaloner1995bayesian}. Beyond $D$-optimality, several other objectives are encountered in the literature \cite{pukelsheim2006optimal}; many involve some function of the covariance of the estimate of $\beta$ or of its inverse, the information matrix, such as $E$-optimality (maximizing the smallest eigenvalue of the information matrix) or $T$-optimality (maximizing its trace). Our focus on $D$-optimality is motivated by both its tractability and its relationship to the information gain. %are encountered in the literature, though they do not relate to entropy as $D$-optimality. We leave the task of approaching the maximization of such objectives from a strategic point of view as an open problem.
+}
\subsection{Budget Feasible Mechanisms for General Submodular Functions}
Budget feasible mechanism design was originally proposed by \citeN{singer-mechanisms}. Singer considers the problem of maximizing an arbitrary submodular function subject to a budget constraint in the \emph{value query} model, \emph{i.e.}, assuming an oracle that provides the value of the submodular objective on any given set.
Singer shows that there exists a randomized 112-approximation mechanism for submodular maximization that is \emph{universally truthful} (\emph{i.e.}, sampled from a distribution over deterministic truthful mechanisms). \citeN{chen} improve this result by providing a 7.91-approximate mechanism, and show a corresponding lower bound of $2$ for universally truthful randomized mechanisms for submodular maximization.
@@ -29,7 +28,7 @@ Posted price, rather than direct revelation mechanisms, are also studied in \cit
\subsection{Data Markets}
-A series of recent papers \cite{mcsherrytalwar,approximatemechanismdesign,xiao:privacy-truthfulness,chen:privacy-truthfulness} consider the related problem of retrieving data from an \textit{unverified} database, where strategic users may misreport their data to protect their privacy. \citeN{mcsherrytalwar} argue that \emph{differentially private} mechanisms offer a form of \emph{approximate truthfulness}: if users have a utility that depends on their privacy, reporting their data untruthfully can only increase their utility by a small amount. %\citeN{xiao:privacy-truthfulness}, improving upon earlier work by~\citeN{approximatemechanismdesign}, constructs mechanisms that simultaneously achieve exact truthfulness as well as differential privacy.
+A series of recent papers \cite{mcsherrytalwar,approximatemechanismdesign,xiao:privacy-truthfulness,chen:privacy-truthfulness} consider the related problem of retrieving data from an \textit{unverified} database, where strategic users may misreport their data to protect their privacy. %\citeN{mcsherrytalwar} argue that \emph{differentially private} mechanisms offer a form of \emph{approximate truthfulness}: if users have a utility that depends on their privacy, reporting their data untruthfully can only increase their utility by a small amount. %\citeN{xiao:privacy-truthfulness}, improving upon earlier work by~\citeN{approximatemechanismdesign}, constructs mechanisms that simultaneously achieve exact truthfulness as well as differential privacy.
We depart by assuming that experiment outcomes are tamper-proof and cannot be manipulated.
A different set of papers \cite{ghosh-roth:privacy-auction,roth-liggett,pranav} consider a setting where data cannot be misreported, but the utility of users is a function of the differential privacy guarantee an analyst provides them. We do not focus on privacy; any privacy costs in our setup are internalized in the costs $c_i$. %Eliciting private data through a \emph{survey} \cite{roth-liggett}, whereby individuals first decide whether to participate in the survey and then report their data,
% also falls under the unverified database setting \cite{xiao:privacy-truthfulness}. In the \emph{verified} database setting, \citeN{ghosh-roth:privacy-auction} and~\citeN{pranav} consider budgeted auctions where users have a utility again captured by differential privacy. Our work departs from the above setups in that utilities do not involve privacy, whose effects are assumed to be internalized in the costs reported by the users; crucially, we also assume that experiments are tamper-proof, and individuals can misreport their costs but not their values.