-rwxr-xr-x | conclusion.tex | 15 | ++++++++++-----
-rwxr-xr-x | paper.tex      |  4 | +++-
2 files changed, 13 insertions, 6 deletions
diff --git a/conclusion.tex b/conclusion.tex
index 6a3917e..94d8202 100755
--- a/conclusion.tex
+++ b/conclusion.tex
@@ -1,6 +1,11 @@
-We have proposed a convex relaxation for \EDP, and showed that it can be used to design a $\delta$-truthful, constant approximation mechanism that runs in polynomial time. Our objective function, commonly known as the Bayes $D$-optimality criterion, is motivated by linear regression, and in particular captures the information gain when experiments are used to learn a linear model. %in \reals^d.
+We have proposed a convex relaxation for \EDP, and showed that it can be used
+to design a $\delta$-truthful, constant approximation mechanism that runs in
+polynomial time. Our objective function, commonly known as the Bayes
+$D$-optimality criterion, is motivated by linear regression.
+%and in particular captures the information gain when experiments are used to learn a linear model in \reals^d.
 
-A natural question to ask is to what extent the results we present here
+A natural question to ask is to what extent the results
+%we present here
 generalize to other machine learning tasks beyond linear regression. We
 outline a path in pursuing such generalizations in Appendix~\ref{sec:ext}.
 In particular, although the information gain is not generally a submodular
@@ -9,15 +14,15 @@
 outcomes are perturbed by independent noise, the information gain indeed
 exhibits submodularity. Several important learning tasks fall under this
 category, including generalized linear regression, logistic regression,
 \emph{etc.} In light of this, it would be interesting to investigate whether
-our convex relaxation approach generalizes to other learning tasks in this
-broader class.
+our convex relaxation approach generalizes to other tasks in this broader class.
 
 The literature on experimental design includes several other optimality
 criteria~\cite{pukelsheim2006optimal,atkinson2007optimum}. Our convex
 relaxation \eqref{eq:our-relaxation} involved swapping the $\log\det$
 scalarization with the expectation appearing in the multi-linear extension
 \eqref{eq:multi-linear}. The same swap is known to yield concave objectives for
-several other optimality criteria, even when the latter are not submodular
+several other optimality criteria
+%, even when the latter are not submodular
 (see, \emph{e.g.}, \citeN{boyd2004convex}). Exploiting the convexity of such
 relaxations to design budget feasible mechanisms is an additional open problem
 of interest.
diff --git a/paper.tex b/paper.tex
--- a/paper.tex
+++ b/paper.tex
@@ -1,4 +1,4 @@
-\documentclass[draft]{llncs}
+\documentclass{llncs}
 \pagestyle{plain}
 \usepackage[numbers, sectionbib]{natbib}
 \usepackage[utf8]{inputenc}
@@ -38,8 +38,10 @@
 \input{main}
 \section{Conclusions}\label{sec:concl}
 \input{conclusion}
+\begin{comment}
 \section*{Acknowledgments}
 \input{ack}
+\end{comment}
 \bibliographystyle{splncsnat}
 \begin{footnotesize}
 \bibliography{notes}
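
Note on the "swap" described in the conclusion text above (log det scalarization exchanged with the expectation in the multi-linear extension): a minimal sketch of what this means, written out in assumed notation -- the symbols $y$, $v_i$, $R$, $F$, and $L$ below are illustrative, not the paper's own, and it is assumed each experiment $i$ contributes a rank-one observation $v_i \in \mathbb{R}^d$ while the prior contributes a positive definite matrix $R$.

% Bayes D-optimality as a set function over a chosen set S of experiments
% (assumed form; R is the prior's contribution, v_i the design vectors):
%   f(S) = log det( R + \sum_{i in S} v_i v_i^T )
%
% Multi-linear extension: include experiment i independently w.p. y_i:
\[
  F(y) \;=\; \mathbb{E}_{S \sim y}\!\left[ \log\det\Bigl( R + \sum_{i \in S} v_i v_i^{\top} \Bigr) \right],
\]
% Relaxation: swap the log det scalarization with the expectation:
\[
  L(y) \;=\; \log\det\Bigl( R + \sum_{i=1}^{n} y_i \, v_i v_i^{\top} \Bigr).
\]

Since $\log\det$ is concave over positive definite matrices, $L$ is concave in $y \in [0,1]^n$, and Jensen's inequality gives $L(y) \ge F(y)$; moving the expectation inside the scalarization is what turns the (generally non-concave) multi-linear extension into a tractable concave surrogate.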
