From 1729c3e637ff54707bcfc3e386237fe425f4988c Mon Sep 17 00:00:00 2001
From: Stratis Ioannidis
Date: Sun, 22 Sep 2013 23:16:45 +0200
Subject: concl

---
 conclusion.tex | 42 +++++++++++++++++++++---------------------
 1 file changed, 21 insertions(+), 21 deletions(-)

diff --git a/conclusion.tex b/conclusion.tex
index 94d8202..6b17237 100755
--- a/conclusion.tex
+++ b/conclusion.tex
@@ -1,30 +1,30 @@
-We have proposed a convex relaxation for \EDP, and showed that it can be used
+We have proposed a convex relaxation for \EDP, and showed how to use it
 to design a $\delta$-truthful, constant approximation mechanism that runs in
-polynomial time. Our objective function, commonly known as the Bayes
-$D$-optimality criterion, is motivated by linear regression.
+polynomial time. %Our objective function, commonly known as the Bayes $D$-optimality criterion, is motivated by linear regression.
 %and in particular captures the information gain when experiments are used to learn a linear model in \reals^d.
-
-A natural question to ask is to what extent the results
+A natural question to ask is to what extent our results
 %we present here
 generalize to other machine learning tasks beyond linear regression. We outline
-a path in pursuing such generalizations in Appendix~\ref{sec:ext}. In
-particular, although the information gain is not generally a submodular
-function, we show that for a wide class of models, in which experiments
-outcomes are perturbed by independent noise, the information gain indeed
-exhibits submodularity. Several important learning tasks fall under this
-category, including generalized linear regression, logistic regression,
-\emph{etc.} In light of this, it would be interesting to investigate whether
+a path to such a generalization in Appendix~\ref{sec:ext}: %. In
+%particular, although the information gain is not generally a submodular
+%function, we show that
+for a wide class of models in which experiment
+outcomes are perturbed by independent noise, the information gain
+exhibits submodularity. %Several important learning tasks fall under this category, including generalized linear regression, logistic regression, \emph{etc.}
+In light of this, it would be interesting to investigate whether
 our convex relaxation approach generalizes to other tasks in this broader
 class.
-
-The literature on experimental design includes several other optimality
-criteria~\cite{pukelsheim2006optimal,atkinson2007optimum}. Our convex
-relaxation \eqref{eq:our-relaxation} involved swapping the $\log\det$
-scalarization with the expectation appearing in the multi-linear extension
-\eqref{eq:multi-linear}. The same swap is known to yield concave objectives for
-several other optimality criteria
+Moreover,
+the literature on experimental design includes several other optimality
+criteria~\cite{pukelsheim2006optimal,atkinson2007optimum}, many of which are convex %Our convex
+%relaxation \eqref{eq:our-relaxation} involved swapping the $\log\det$
+%scalarization with the expectation appearing in the multi-linear extension
+%\eqref{eq:multi-linear}. The same swap is known to yield concave objectives for
+%several other optimality criteria
 %, even when the latter are not submodular
-(see, \emph{e.g.}, \citeN{boyd2004convex}). Exploiting the convexity of such
-relaxations to design budget feasible mechanisms is an additional open problem
+%(see, \emph{e.g.},
+\cite{boyd2004convex}. Exploiting this % the convexity of such
+%relaxations
+to design budget feasible mechanisms is an additional open problem
 of interest.
 %Many can be seen as scalarizations (\emph{i.e.}, scalar mappings) of the the matrix $(X_S^TX_T)^{-1}$---the $\log\det$ being one of them. Studying such alternative objectives, even within the linear regression setting we study here, is also an interesting related problem. Crucially, o
--
cgit v1.2.3-70-g09d2
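
For context, the $\log\det$/expectation swap mentioned in the removed lines above can be sketched as follows, in illustrative notation (the prior matrix $R$, experiment vectors $x_i$, and the names $F$ and $L$ are assumptions, not the paper's own definitions in \eqref{eq:multi-linear} and \eqref{eq:our-relaxation}). With the set function $f(S) = \log\det\big(R + \sum_{i \in S} x_i x_i^\top\big)$, the multi-linear extension keeps the expectation outside the scalarization,
\[
  F(y) \;=\; \mathbb{E}_{S \sim y}\Big[\log\det\Big(R + \sum_{i \in S} x_i x_i^\top\Big)\Big],
\]
where $S$ contains each experiment $i$ independently with probability $y_i$, while swapping the expectation with the $\log\det$ yields the concave function
\[
  L(y) \;=\; \log\det\Big(R + \sum_{i} y_i\, x_i x_i^\top\Big) \;\ge\; F(y),
\]
the inequality holding by Jensen's inequality, since $\log\det$ is concave over positive definite matrices and $\mathbb{E}_{S \sim y}\big[R + \sum_{i \in S} x_i x_i^\top\big] = R + \sum_i y_i\, x_i x_i^\top$.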