author    Stratis Ioannidis <stratis@stratis-Latitude-E6320.(none)>  2013-07-07 17:33:12 -0700
committer Stratis Ioannidis <stratis@stratis-Latitude-E6320.(none)>  2013-07-07 17:33:12 -0700
commit    d32f92b8f4a4e373b4ff82b3c64d3a69c5cf9c68 (patch)
tree      b1eb3c47abf6545198b09bd5d1bd6ea606411a56
parent    240bbb99052ea5e873128649d01228442d889a33 (diff)
download  recommendation-d32f92b8f4a4e373b4ff82b3c64d3a69c5cf9c68.tar.gz
conclusions
-rw-r--r--  conclusion.tex | 12
-rw-r--r--  notes.bib      |  2
-rw-r--r--  related.tex    |  4
3 files changed, 12 insertions, 6 deletions
diff --git a/conclusion.tex b/conclusion.tex
index 695dfb7..0935b62 100644
--- a/conclusion.tex
+++ b/conclusion.tex
@@ -1,4 +1,10 @@
-To be written. Will contain
-(a) list of extensions with a forward pointer to Appendix
-(b) some concluding remark that we initiated the area, the opt criteria is not a priori clear, etc.
+We have proposed a convex relaxation for \EDP, and shown that it can be used to design a $\delta$-truthful, constant-approximation mechanism that runs in polynomial time. Our objective function, commonly known as the Bayes $D$-optimality criterion, is motivated by linear regression, and in particular captures the information gain when experiments are used to learn a linear model. %in \reals^d.
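+% Illustrative only: the Gaussian prior, noise model, and notation below are assumptions made for this sketch, not definitions taken from elsewhere in the paper.
+For instance, if $\beta \sim \mathcal{N}(0, I)$ and the selected experiments yield outcomes $y_S = X_S \beta + \varepsilon$ with independent noise $\varepsilon \sim \mathcal{N}(0, \sigma^2 I)$, the information gain admits the closed form
+\[
+ I(\beta; y_S) \;=\; \tfrac{1}{2}\log\det\!\left(I + \sigma^{-2} X_S^T X_S\right),
+\]
+so maximizing it over sets $S$ is precisely a $\log\det$ (Bayes $D$-optimality) objective.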
+
+A natural question to ask is to what extent the results we present here generalize to other machine learning tasks beyond linear regression. We outline a path toward such generalizations in Appendix~\ref{sec:ext}. In particular, although the information gain is not a submodular function in general, we show that for a wide class of models in which experiment outcomes are perturbed by independent noise, the information gain does indeed exhibit submodularity. Several important learning tasks fall under this category, including generalized linear regression, logistic regression, \emph{etc.} In light of this, it would be interesting to investigate whether our convex relaxation approach generalizes to other learning tasks in this broader class.
+
+The literature on experimental design includes several other optimality criteria~\cite{pukelsheim2006optimal,atkinson2007optimum}. Many can be seen as scalarizations (\emph{i.e.}, scalar mappings) of the matrix $(X_S^TX_S)^{-1}$---the $\log\det$ being one of them. Studying such alternative objectives, even within the linear regression setting we study here, is also an interesting related problem. Crucially, our convex relaxation \eqref{eq:our-relaxation} involved swapping the $\log\det$ scalarization with the expectation appearing in the multi-linear extension \eqref{eq:multi-linear}. The same swap is known to yield concave objectives for several other optimality criteria, even when the latter are not necessarily submodular (see, \emph{e.g.}, \citeN{boyd2004convex}). Exploiting the convexity of such relaxations to design budget-feasible mechanisms is an additional open problem of interest.
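+% Illustrative only: standard textbook definitions, stated in notation assumed for this sketch.
+Concretely, writing $M_S = X_S^T X_S$ for the information matrix, the classic $D$-, $A$-, and $E$-optimality criteria minimize the scalarizations
+\[
+ \det(M_S^{-1}), \qquad \mathrm{tr}(M_S^{-1}), \qquad \lambda_{\max}(M_S^{-1}),
+\]
+respectively; since $\log$ is monotone, minimizing $\log\det(M_S^{-1})$ is equivalent to $D$-optimality.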
+
+%To be written. Will contain
+%(a) list of extensions with a forward pointer to Appendix
+%(b) some concluding remark that we initiated the area, the opt criteria is not a priori clear, etc.
diff --git a/notes.bib b/notes.bib
index ace9ca7..61affef 100644
--- a/notes.bib
+++ b/notes.bib
@@ -81,7 +81,7 @@ year = 2012
author = {Frank McSherry and
Kunal Talwar},
title = {Mechanism Design via Differential Privacy},
- booktitle = {Proc. FOCS},
+ booktitle = {FOCS},
year = {2007}
}
diff --git a/related.tex b/related.tex
index 91b25c5..6cf059f 100644
--- a/related.tex
+++ b/related.tex
@@ -62,11 +62,11 @@ on solving a convex optimization problem which can only be solved approximately.
``well-conditioned'' problem, which resembles the technical challenge we face in
Section~\ref{sec:monotonicity}. However, we seek a deterministic mechanism and $\delta$-truthfulness, not truthfulness-in-expectation. In addition, our objective is not a matroid rank sum function. As such, neither the methodology for dealing with problems that are not ``well-conditioned'' nor the approximation guarantees of the convex relaxation in \cite{dughmi2011convex} readily extends to \EDP.
-\citeN{briest-approximation} construct monote FPTAS for problems that can be approximated through rounding techniques, which in turn can be used to construct truthful, deterministic, constant-approximation mechanisms for corresponding combinatorial auctions. \EDP is not readily approximable through such rounding techniques; as such, we rely on a relaxation to approximate it.
+\citeN{briest-approximation} construct monotone FPTAS for problems that can be approximated through rounding techniques, which in turn can be used to construct truthful, deterministic, constant-approximation mechanisms for corresponding combinatorial auctions. \EDP{} is not readily approximable through such rounding techniques; as such, we rely on a relaxation to approximate it.
\paragraph{$\delta$-Truthfulness and Differential Privacy}
-The notion of $\delta$-truthfulness has attracted considerable attention recently in the context of differential privacy (see, \emph{e.g.}, the survey by \citeN{pai2013privacy}). \citeN{mcsherrytalwar} were the first to observe that any $\epsilon$-differentially private mechanism must also be $\delta$-truthful in expectation, for $\delta=2\epsilon$. This property was used to construct $\delta$-truthful (in expectation) mechanisms for a digital goods auction~\cite{mcsherrytalwar} and for $\alpha$-approximate equilibrium selection \cite{kearns2012}. \citeN{approximatemechanismdesign} propose a framework for converting a differentially private mechanism to a truthful-in-expectation mechanism by randomly selecting between a differentially private mechanism with good approximation guarantees, an a truthful mechanism, and apply it to a mechanism for the facility location problem. We depart from the above works in seeking a deterministic mechanism for \EDP, and using a stronger notion of $\delta$-truthfulness.
+The notion of $\delta$-truthfulness has attracted considerable attention recently in the context of differential privacy (see, \emph{e.g.}, the survey by \citeN{pai2013privacy}). \citeN{mcsherrytalwar} were the first to observe that any $\epsilon$-differentially private mechanism must also be $\delta$-truthful in expectation, for $\delta=2\epsilon$. This property was used to construct $\delta$-truthful (in expectation) mechanisms for a digital goods auction~\cite{mcsherrytalwar} and for $\alpha$-approximate equilibrium selection \cite{kearns2012}. \citeN{approximatemechanismdesign} propose a framework for converting a differentially private mechanism to a truthful-in-expectation mechanism by randomly selecting between a differentially private mechanism with good approximation guarantees, and a truthful mechanism. They apply their framework to the \textsc{FacilityLocation} problem. We depart from the above works in seeking a deterministic mechanism for \EDP, and using a stronger notion of $\delta$-truthfulness.
\begin{comment}