Diffstat (limited to 'conclusion.tex')
| -rw-r--r-- | conclusion.tex | 6 |
1 files changed, 3 insertions, 3 deletions
diff --git a/conclusion.tex b/conclusion.tex
index 5d32a1e..6a3917e 100644
--- a/conclusion.tex
+++ b/conclusion.tex
@@ -1,12 +1,12 @@
-We have proposed a convex relaxation for \EDP, and showed that it can be used to design a $\delta$-truthful, constant approximation mechanism that runs in polynomial time. Our objective function, commonly known as the Bayes $D$-optimality criterion, is motivated from linear regression, and in particular captures the information gain when experiments are used to learn a linear model. %in \reals^d.
+We have proposed a convex relaxation for \EDP, and showed that it can be used to design a $\delta$-truthful, constant approximation mechanism that runs in polynomial time. Our objective function, commonly known as the Bayes $D$-optimality criterion, is motivated by linear regression, and in particular captures the information gain when experiments are used to learn a linear model. %in \reals^d.
 A natural question to ask is to what extent the results we present here generalize to other machine learning tasks beyond linear regression. We outline a path in pursuing such generalizations in Appendix~\ref{sec:ext}. In particular, although the information gain is not generally a submodular function, we show that for a wide class of models, in which experiments
-outcomes are perturbed by independent noise, the information does indeed
-exhibit submodularity. Several important learning tasks fall under this
+outcomes are perturbed by independent noise, the information gain indeed
+exhibits submodularity. Several important learning tasks fall under this
 category, including generalized linear regression, logistic regression,
 \emph{etc.} In light of this, it would be interesting to investigate whether our convex relaxation approach generalizes to other learning tasks in this
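
For reference, the "Bayes $D$-optimality criterion" mentioned in the text above is, in its standard form for Bayesian linear regression, the mutual information between the model parameters and the observed outcomes. The following LaTeX sketch uses assumed notation ($S$, $x_i$, $\Sigma$, $\sigma$) and the textbook normalization; the paper's exact definition may differ.

% Sketch of the standard Bayes D-optimality / information-gain objective for
% linear regression with Gaussian prior theta ~ N(0, Sigma) and i.i.d. noise
% of variance sigma^2; notation is assumed, not taken from the paper.
\[
  f(S) \;=\; I(\theta;\, y_S)
       \;=\; \tfrac{1}{2}\,\log\det\!\Big( I_d
             \;+\; \sigma^{-2}\, \Sigma^{1/2}
             \Big( \textstyle\sum_{i \in S} x_i x_i^{\top} \Big)
             \Sigma^{1/2} \Big),
\]
% where S is the set of purchased experiments, x_i in R^d the feature vector
% of experiment i, Sigma the prior covariance of theta, and sigma^2 the noise
% variance.

Under this Gaussian model the set function $f$ is monotone and submodular, which is the property the second change in the commit refers to: with independent noise on the experiment outcomes, the information gain exhibits submodularity.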
