Diffstat (limited to 'conclusion.tex')
| -rw-r--r-- | conclusion.tex | 22 |
1 file changed, 20 insertions, 2 deletions
diff --git a/conclusion.tex b/conclusion.tex
index 667a0bb..5d32a1e 100644
--- a/conclusion.tex
+++ b/conclusion.tex
@@ -1,8 +1,26 @@
 We have proposed a convex relaxation for \EDP, and showed that it can be used to design a $\delta$-truthful, constant approximation mechanism that runs in polynomial time. Our objective function, commonly known as the Bayes $D$-optimality criterion, is motivated from linear regression, and in particular captures the information gain when experiments are used to learn a linear model. %in \reals^d.
-A natural question to ask is to what extent the results we present here generalize to other machine learning tasks beyond linear regression. We outline a path in pursueing such generalizations in Appendix~\ref{sec:ext}. In particular, although the information gain is not generally a submodular function, we show that for a wide class of models, in experiments outcomes are perturbed by independent noise, the information does indeed exhibit submodularity. Several important learning tasks fall under this category, including generalized linear regression, logistic regression, \emph{etc.} In light of this, it would be interesting to investigate whether our convex relaxation approach generalizes to other learning tasks in this broader class.
+A natural question to ask is to what extent the results we present here
+generalize to other machine learning tasks beyond linear regression. We outline
+a path in pursuing such generalizations in Appendix~\ref{sec:ext}. In
+particular, although the information gain is not generally a submodular
+function, we show that for a wide class of models, in which experiment
+outcomes are perturbed by independent noise, the information gain does indeed
+exhibit submodularity. Several important learning tasks fall under this
+category, including generalized linear regression, logistic regression,
+\emph{etc.} In light of this, it would be interesting to investigate whether
+our convex relaxation approach generalizes to other learning tasks in this
+broader class.
 
-The literature on experimental design includes several other optimality criteria~\cite{pukelsheim2006optimal,atkinson2007optimum}. Our convex relaxation \eqref{eq:our-relaxation} involved swapping the $\log\det$ scalarization with the expectation appearing in the multi-linear extension \eqref{eq:multi-linear}. The same swap is known to yield concave objectives for several other optimality criteria, even when the latter are not submodular (see, \emph{e.g.}, \citeN{boyd2004convex}). Exploiting the convexity of such relaxations to design budget feasible mechanisms is an additonal open problem of interest.
+The literature on experimental design includes several other optimality
+criteria~\cite{pukelsheim2006optimal,atkinson2007optimum}. Our convex
+relaxation \eqref{eq:our-relaxation} involved swapping the $\log\det$
+scalarization with the expectation appearing in the multi-linear extension
+\eqref{eq:multi-linear}. The same swap is known to yield concave objectives for
+several other optimality criteria, even when the latter are not submodular
+(see, \emph{e.g.}, \citeN{boyd2004convex}). Exploiting the convexity of such
+relaxations to design budget feasible mechanisms is an additional open problem
+of interest.
 %Many can be seen as scalarizations (\emph{i.e.}, scalar mappings) of the the matrix $(X_S^TX_T)^{-1}$---the $\log\det$ being one of them. Studying such alternative objectives, even within the linear regression setting we study here, is also an interesting related problem.
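The relaxation mentioned in the second added paragraph swaps the $\log\det$ scalarization with the expectation in the multi-linear extension. Neither \eqref{eq:our-relaxation} nor \eqref{eq:multi-linear} appears in this hunk, so the sketch below uses assumed notation ($x_i$ for the design vector of experiment $i$, $\lambda_i \in [0,1]$ for its independent inclusion probability, and an identity prior) rather than the paper's exact definitions:

\begin{align*}
F(\lambda) &= \mathbb{E}_{S\sim\lambda}\Big[\log\det\Big(I_d + \sum_{i\in S} x_i x_i^{T}\Big)\Big]
  &&\text{(multi-linear extension of the $\log\det$ objective)}\\
G(\lambda) &= \log\det\Big(I_d + \sum_{i} \lambda_i\, x_i x_i^{T}\Big)
  &&\text{(expectation moved inside the scalarization)}
\end{align*}
% Sketch only: the notation above is assumed, not taken from \eqref{eq:our-relaxation}.
% The map $\lambda \mapsto I_d + \sum_i \lambda_i x_i x_i^T$ is affine and $\log\det$
% is concave on positive definite matrices, so $G$ is concave in $\lambda$;
% Jensen's inequality gives $G(\lambda) \ge F(\lambda)$, i.e., the relaxed
% objective upper-bounds the multi-linear extension.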
