path: root/conclusion.tex
Diffstat (limited to 'conclusion.tex')
-rwxr-xr-x  conclusion.tex  42
1 file changed, 21 insertions(+), 21 deletions(-)
diff --git a/conclusion.tex b/conclusion.tex
index 94d8202..6b17237 100755
--- a/conclusion.tex
+++ b/conclusion.tex
@@ -1,30 +1,30 @@
-We have proposed a convex relaxation for \EDP, and showed that it can be used
+We have proposed a convex relaxation for \EDP, and shown how to use it
to design a $\delta$-truthful, constant approximation mechanism that runs in
-polynomial time. Our objective function, commonly known as the Bayes
-$D$-optimality criterion, is motivated by linear regression.
+polynomial time. %Our objective function, commonly known as the Bayes $D$-optimality criterion, is motivated by linear regression.
%and in particular captures the information gain when experiments are used to learn a linear model in \reals^d.
-
-A natural question to ask is to what extent the results
+A natural question to ask is to what extent our results
%we present here
generalize to other machine learning tasks beyond linear regression. We outline
-a path in pursuing such generalizations in Appendix~\ref{sec:ext}. In
-particular, although the information gain is not generally a submodular
-function, we show that for a wide class of models, in which experiments
-outcomes are perturbed by independent noise, the information gain indeed
-exhibits submodularity. Several important learning tasks fall under this
-category, including generalized linear regression, logistic regression,
-\emph{etc.} In light of this, it would be interesting to investigate whether
+a path to such a generalization in Appendix~\ref{sec:ext}: %. In
+%particular, although the information gain is not generally a submodular
+%function, we show that
+for a wide class of models in which experiment
+outcomes are perturbed by independent noise, the information gain
+exhibits submodularity. %Several important learning tasks fall under this category, including generalized linear regression, logistic regression, \emph{etc.}
+In light of this, it would be interesting to investigate whether
our convex relaxation approach generalizes to other tasks in this broader class.
-
-The literature on experimental design includes several other optimality
-criteria~\cite{pukelsheim2006optimal,atkinson2007optimum}. Our convex
-relaxation \eqref{eq:our-relaxation} involved swapping the $\log\det$
-scalarization with the expectation appearing in the multi-linear extension
-\eqref{eq:multi-linear}. The same swap is known to yield concave objectives for
-several other optimality criteria
+Moreover,
+the literature on experimental design includes several other optimality
+criteria~\cite{pukelsheim2006optimal,atkinson2007optimum}, many of which are convex %Our convex
+%relaxation \eqref{eq:our-relaxation} involved swapping the $\log\det$
+%scalarization with the expectation appearing in the multi-linear extension
+%\eqref{eq:multi-linear}. The same swap is known to yield concave objectives for
+%several other optimality criteria
%, even when the latter are not submodular
-(see, \emph{e.g.}, \citeN{boyd2004convex}). Exploiting the convexity of such
-relaxations to design budget feasible mechanisms is an additional open problem
+%(see, \emph{e.g.},
+\cite{boyd2004convex}. Exploiting this convexity
+%relaxations
+to design budget feasible mechanisms is an additional open problem
of interest.
%Many can be seen as scalarizations (\emph{i.e.}, scalar mappings) of the matrix $(X_S^T X_S)^{-1}$---the $\log\det$ being one of them. Studying such alternative objectives, even within the linear regression setting we study here, is also an interesting related problem. Crucially, o
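
To make the submodularity claim in the hunk above concrete: a minimal statement under the independent-noise assumption; the shorthand $IG$, $f_i$, $\varepsilon_i$ is ours, and the precise model class is the one of Appendix~\ref{sec:ext}. For experiments with outcomes $y_i = f_i(\theta) + \varepsilon_i$ and mutually independent noise terms $\varepsilon_i$, the information gain

\[
  IG(S) \;=\; H(\theta) - H(\theta \mid y_S)
\]

is submodular, i.e., for all $S \subseteq T$ and $i \notin T$,

\[
  IG(S \cup \{i\}) - IG(S) \;\ge\; IG(T \cup \{i\}) - IG(T),
\]

so an additional experiment is worth less the more experiments have already been selected.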
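Similarly, the swap described in the removed lines of the last hunk can be written out. A minimal sketch, with $\lambda \in [0,1]^n$ the vector of selection probabilities and $M_S$ the information matrix of a set $S$ (the exact forms in \eqref{eq:multi-linear} and \eqref{eq:our-relaxation} may differ):

\[
  F(\lambda) \;=\; \mathbb{E}_{S \sim \lambda}\bigl[\log\det M_S\bigr]
  \qquad\longrightarrow\qquad
  L(\lambda) \;=\; \log\det\bigl(\mathbb{E}_{S \sim \lambda}[M_S]\bigr).
\]

Since $\mathbb{E}_{S \sim \lambda}[M_S]$ is affine in $\lambda$ and $\log\det$ is concave on positive definite matrices, $L$ is concave, and Jensen's inequality gives $L(\lambda) \ge F(\lambda)$, so the relaxed objective upper-bounds the multi-linear extension.

The resulting program is solvable with off-the-shelf convex solvers. A hypothetical illustration (the data, variable names, and knapsack constraint below are our assumptions, not the paper's mechanism):

import numpy as np
import cvxpy as cp

# Hedged sketch: fractional D-optimal design under a budget (knapsack)
# constraint; illustrative only, not \eqref{eq:our-relaxation} itself.
rng = np.random.default_rng(0)
n, d, budget = 20, 5, 3.0
V = rng.standard_normal((n, d))      # row i is the design vector v_i
c = rng.uniform(0.5, 1.5, size=n)    # cost of running experiment i

lam = cp.Variable(n)                 # lam[i]: marginal prob. of selecting i
# E_{S ~ lam}[M_S] = sum_i lam_i v_i v_i^T, affine in lam
M = sum(lam[i] * np.outer(V[i], V[i]) for i in range(n))
prob = cp.Problem(cp.Maximize(cp.log_det(M)),
                  [lam >= 0, lam <= 1, c @ lam <= budget])
prob.solve()                         # needs a solver with exp/PSD cones, e.g. SCS
print(np.round(lam.value, 3))        # fractional design within budget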