author    Stratis Ioannidis <stratis@stratis-Latitude-E6320.(none)>    2013-07-08 13:18:24 -0700
committer Stratis Ioannidis <stratis@stratis-Latitude-E6320.(none)>    2013-07-08 13:18:24 -0700
commit    8360348b640a56c004730036025b0f3f9f9ed9a2 (patch)
tree      64f9f16a7bc058630fdb56c8c59d3bc9694570a6 /conclusion.tex
parent    ca7140f58fc666ae5b05a5cf4e3bf8ad09352b82 (diff)
small
Diffstat (limited to 'conclusion.tex')
-rw-r--r--  conclusion.tex  |  6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/conclusion.tex b/conclusion.tex
index 5d32a1e..6a3917e 100644
--- a/conclusion.tex
+++ b/conclusion.tex
@@ -1,12 +1,12 @@
-We have proposed a convex relaxation for \EDP, and showed that it can be used to design a $\delta$-truthful, constant approximation mechanism that runs in polynomial time. Our objective function, commonly known as the Bayes $D$-optimality criterion, is motivated from linear regression, and in particular captures the information gain when experiments are used to learn a linear model. %in \reals^d.
+We have proposed a convex relaxation for \EDP, and shown that it can be used to design a $\delta$-truthful, constant approximation mechanism that runs in polynomial time. Our objective function, commonly known as the Bayes $D$-optimality criterion, is motivated by linear regression, and in particular captures the information gain when experiments are used to learn a linear model. %in \reals^d.
A natural question to ask is to what extent the results we present here
generalize to other machine learning tasks beyond linear regression. We outline
a path for pursuing such generalizations in Appendix~\ref{sec:ext}. In
particular, although the information gain is not generally a submodular
function, we show that for a wide class of models, in which experiment
-outcomes are perturbed by independent noise, the information does indeed
-exhibit submodularity. Several important learning tasks fall under this
+outcomes are perturbed by independent noise, the information gain indeed
+exhibits submodularity. Several important learning tasks fall under this
category, including generalized linear regression, logistic regression,
\emph{etc.} In light of this, it would be interesting to investigate whether
our convex relaxation approach generalizes to other learning tasks in this