 intro.tex   |  2 +-
 notes.bib   | 24 +++++++++++++++++++-----
 problem.tex |  3 +--
 related.tex | 16 ++++++++++------
 4 files changed, 32 insertions(+), 13 deletions(-)
diff --git a/intro.tex b/intro.tex
--- a/intro.tex
+++ b/intro.tex
@@ -73,7 +73,7 @@ Our convex relaxation of \EDP{} involves maximizing a self-concordant function s
 %Our approach to mechanisms for experimental design --- by
 % optimizing the information gain in parameters like $\beta$ which are estimated through the data analysis process --- is general.
 We give examples of this approach beyond linear regression to a general class that includes logistic regression and learning binary functions, and show that the corresponding budgeted mechanism design problem is also expressed through a submodular optimization. Hence, prior work \cite{chen,singer-mechanisms} immediately applies, and gives randomized, universally truthful, polynomial time, constant factor approximation mechanisms for problems in this class. Getting deterministic, truthful, polynomial time mechanisms with a constant approximation factor for this class or specific problems in it, like we did for \EDP, remains an open problem.
-In what follows, we describe related work in Section~\ref{sec:related}. We briefly review experimental design and budget feasible mechanisms in Section~\ref{sec:peel} and define \SEDP\ formally. We present our convex relaxation to \EDP{} in Section~\ref{sec:approximation} and, finally, show how it can be used to construct our mechanism in Section~\ref{sec:main}; all our proofs of our technical results are provided in the appendix. %we present our mechanism for \SEDP\ and state our main results. %A generalization of our framework to machine learning tasks beyond linear regression is presented in Section~\ref{sec:ext}.
+In what follows, we describe related work in Section~\ref{sec:related}. We briefly review experimental design and budget feasible mechanisms in Section~\ref{sec:peel} and define \SEDP\ formally. We present our convex relaxation to \EDP{} in Section~\ref{sec:approximation} and, finally, show how it can be used to construct our mechanism in Section~\ref{sec:main}; all proofs of our technical results are provided in the appendix. %we present our mechanism for \SEDP\ and state our main results. %A generalization of our framework to machine learning tasks beyond linear regression is presented in Section~\ref{sec:ext}.
 \junk{
diff --git a/notes.bib b/notes.bib
--- a/notes.bib
+++ b/notes.bib
@@ -50,6 +50,21 @@ year = 2012
 year = {2011}
 }
 
+@unpublished{kearns2012,
+  author = {Michael Kearns and
+            Mallesh M. Pai and
+            Aaron Roth and
+            Jonathan Ullman},
+  title = {Mechanism Design in Large Games: Incentives and Privacy},
+  year = {2012}
+}
+
+@article{pai2013privacy,
+  title = {Privacy and Mechanism Design},
+  author = {Pai, Mallesh and Roth, Aaron},
+  journal = {SIGecom Exchanges},
+  year = {2013}
+}
 
 @inproceedings{approximatemechanismdesign,
@@ -621,12 +636,11 @@ year = 2012
-@techreport{xiao:privacy-truthfulness,
+@inproceedings{xiao:privacy-truthfulness,
 author = "David Xiao",
 title = "Is privacy compatible with truthfulness?",
-institution = {{Cryptology ePrint Archive}},
-number = "2011/005",
-year = "2011"
+booktitle = {{ITCS}},
+year = "2013"
 }
 
 @inproceedings{valiant,
@@ -678,7 +692,7 @@ author = "Alkiviadis G. Akritas and Evgenia K. Akritas and Genadii I. Malaschono
 }
 
 @inproceedings{briest-approximation,
-  title = {Approximation techniques for utilitarian mechanism design},
+  title = {Approximation Techniques for Utilitarian Mechanism Design},
   booktitle = {Proceedings of the thirty-seventh annual {ACM} symposium on Theory of computing},
   author = {Briest, Patrick and Krysta, Piotr and V\"ocking, Berthold},
   year = {2005},
diff --git a/problem.tex b/problem.tex
index 89ffa9d..1001d7b 100644
--- a/problem.tex
+++ b/problem.tex
@@ -54,8 +54,7 @@ Maximizing $I(\beta;y_S)$ is therefore equivalent to maximizing $\log\det(R+ \T{
 $D$-optimality criterion \cite{pukelsheim2006optimal,atkinson2007optimum,chaloner1995bayesian}.
 
-Note that the estimator $\hat{\beta}$ is a linear map of $y_S$. As $y_S$ is a multidimensional normal r.v., so is $\hat{\beta}$; in particular, $\hat{\beta}$ has
-covariance $\sigma^2(R+\T{X_S}X_S)^{-1}$. As such, maximizing $I(\beta;y_S)$ can alternatively be seen as a means of reducing the uncertainty on estimator $\hat{\beta}$ by minimizing the product of the eigenvalues of its covariance (as the latter equals the determinant).
+%Note that the estimator $\hat{\beta}$ is a linear map of $y_S$. As $y_S$ is a multidimensional normal r.v., so is $\hat{\beta}$; in particular, $\hat{\beta}$ has covariance $\sigma^2(R+\T{X_S}X_S)^{-1}$. As such, maximizing $I(\beta;y_S)$ can alternatively be seen as a means of reducing the uncertainty on estimator $\hat{\beta}$ by minimizing the product of the eigenvalues of its covariance (as the latter equals the determinant).
 %An alternative interpretation, given that $(R+ \T{X_S}X_S)^{-1}$ is the covariance of the estimator $\hat{\beta}$, is that it tries to minimize the
 %which is indeed a function of the covariance matrix $(R+\T{X_S}X_S)^{-1}$.
diff --git a/related.tex b/related.tex
index f929c75..91b25c5 100644
--- a/related.tex
+++ b/related.tex
@@ -41,7 +41,7 @@ a truthful, $O(\log^3 n)$-approximate mechanism \cite{bei2012budget}.
 Moreover, in a Bayesian setup, assuming a prior distribution among the agent's costs, there exists a truthful mechanism with a 768/512-approximation ratio \cite{bei2012budget}. %(in terms of expectations)
 Posted price, rather than direct revelation mechanisms, are also studied in \cite{singerposted}.
-\paragraph{Relaxations in Mechanism Design}
+\paragraph{Monotone Approximations in Combinatorial Auctions}
 Relaxations of combinatorial problems are prevalent in \emph{combinatorial auctions}, % \cite{archer-approximate,lavi-truthful,dughmi-truthful,briest-approximation},
 in which an auctioneer aims at maximizing a set function which is the sum of utilities of strategic bidders (\emph{i.e.}, the social welfare).
 As noted by \citeN{archer-approximate},
@@ -58,10 +58,16 @@ Beyond LP relaxations, \citeN{dughmi2011convex} propose truthful-in-expectation
 mechanisms for combinatorial auctions in which the bidders' utilities are matroid rank sum functions (applied earlier to the \textsc{CombinatorialPublicProjects} problem \cite{dughmi-truthful}). As in our case, their framework relies
-on solving a convex optimization problem and ensuring that it is
-``well-conditioned'', resembling the technical challenge we face in
-Section~\ref{sec:monotonicity}. However, we seek a deterministic mechanism and $\delta$-truthfulness, not truthfulness-in-expectation; in addition, our objective is not a matroid rank sum function, so the approximation guarantees of convex relaxation in \cite{dughmi2011convex} do not readily extend to \EDP.
-Closer to us, \citeN{briest-approximation} \ldots. However, we \ldots \note{NEEDS TO BE FIXED!}
+on solving a convex optimization problem that can only be solved approximately. The authors ensure that a solver is applied to a
+``well-conditioned'' problem, which resembles the technical challenge we face in
+Section~\ref{sec:monotonicity}. However, we seek a deterministic mechanism and $\delta$-truthfulness, not truthfulness-in-expectation. In addition, our objective is not a matroid rank sum function. As such, neither the methodology for dealing with problems that are not ``well-conditioned'' nor the approximation guarantees of the convex relaxation in \cite{dughmi2011convex} readily extend to \EDP.
+
+\citeN{briest-approximation} construct monotone FPTASs for problems that can be approximated through rounding techniques, which in turn can be used to construct truthful, deterministic, constant-approximation mechanisms for the corresponding combinatorial auctions. \EDP{} is not readily approximable through such rounding techniques; as such, we rely on a relaxation to approximate it.
+
+\paragraph{$\delta$-Truthfulness and Differential Privacy}
+
+The notion of $\delta$-truthfulness has recently attracted considerable attention in the context of differential privacy (see, \emph{e.g.}, the survey by \citeN{pai2013privacy}). \citeN{mcsherrytalwar} were the first to observe that any $\epsilon$-differentially private mechanism must also be $\delta$-truthful in expectation, for $\delta=2\epsilon$. This property was used to construct $\delta$-truthful (in expectation) mechanisms for a digital goods auction~\cite{mcsherrytalwar} and for $\alpha$-approximate equilibrium selection \cite{kearns2012}. \citeN{approximatemechanismdesign} propose a framework for converting a differentially private mechanism into a truthful-in-expectation mechanism by randomly selecting between a differentially private mechanism with good approximation guarantees and a truthful mechanism, and apply it to the facility location problem. We depart from the above works in seeking a deterministic mechanism for \EDP, and in using a stronger notion of $\delta$-truthfulness.
+
 \begin{comment}
 \paragraph{Data Markets}
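The problem.tex hunk folds the estimator-covariance remark into a comment; the underlying identity (maximizing $\log\det(R+\T{X_S}X_S)$ minimizes the product of the eigenvalues of the covariance $\sigma^2(R+\T{X_S}X_S)^{-1}$, since that product equals its determinant) is easy to check numerically. A minimal sketch; the dimensions, the choice $R=I$, and all variable names are illustrative and not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, sigma = 3, 10, 0.5

R = np.eye(d)                      # prior precision matrix (here simply I)
X_S = rng.standard_normal((n, d))  # feature vectors of the selected set S

M = R + X_S.T @ X_S                # information matrix R + X_S^T X_S
sign, logdet = np.linalg.slogdet(M)
assert sign > 0                    # M is positive definite

# Covariance of the estimator \hat{beta}: sigma^2 * M^{-1}.
cov = sigma**2 * np.linalg.inv(M)

# det(cov) is the product of its eigenvalues, and
# log det(cov) = d*log(sigma^2) - log det(M), so maximizing
# log det(M) minimizes the product of the covariance eigenvalues.
eig_product = np.prod(np.linalg.eigvalsh(cov))
assert np.isclose(np.log(eig_product), d * np.log(sigma**2) - logdet)
```

Since $\sigma$ and $d$ are fixed, the two objectives differ only by a constant, which is the equivalence the commented-out remark states.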
