author     Stratis Ioannidis <stratis@stratis-Latitude-E6320.(none)>  2013-07-07 18:32:36 -0700
committer  Stratis Ioannidis <stratis@stratis-Latitude-E6320.(none)>  2013-07-07 18:32:36 -0700
commit     da4fe3de47f808d2aa77895880b5866f56cc066d (patch)
tree       6e39a6cd6162371aed78fedbc769118c31846dfd
parent     d32f92b8f4a4e373b4ff82b3c64d3a69c5cf9c68 (diff)
concl
-rw-r--r--  conclusion.tex | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/conclusion.tex b/conclusion.tex
index 0935b62..667a0bb 100644
--- a/conclusion.tex
+++ b/conclusion.tex
@@ -2,8 +2,9 @@ We have proposed a convex relaxation for \EDP, and shown that it can be used to
A natural question to ask is to what extent the results we present here generalize to other machine learning tasks beyond linear regression. We outline a path to pursuing such generalizations in Appendix~\ref{sec:ext}. In particular, although the information gain is not generally a submodular function, we show that for a wide class of models, in which experiment outcomes are perturbed by independent noise, the information gain does indeed exhibit submodularity. Several important learning tasks fall into this category, including generalized linear regression and logistic regression. In light of this, it would be interesting to investigate whether our convex relaxation approach generalizes to other learning tasks in this broader class.
-The literature on experimental design includes several other optimality criteria~\cite{pukelsheim2006optimal,atkinson2007optimum}. Many can be seen as scalarizations (\emph{i.e.}, scalar mappings) of the matrix $(X_S^TX_S)^{-1}$---the $\log\det$ being one of them. Studying such alternative objectives, even within the linear regression setting we study here, is also an interesting related problem. Crucially, our convex relaxation \eqref{eq:our-relaxation} involved swapping the $\log\det$ scalarization with the expectation appearing in the multi-linear extension \eqref{eq:multi-linear}. The same swap is known to yield concave objectives for several other optimality criteria, even when the latter are not necessarily submodular (see, \emph{e.g.}, \citeN{boyd2004convex}). Exploiting the convexity of such relaxations to design budget feasible mechanisms is an additional open problem of interest.
+The literature on experimental design includes several other optimality criteria~\cite{pukelsheim2006optimal,atkinson2007optimum}. Our convex relaxation \eqref{eq:our-relaxation} involved swapping the $\log\det$ scalarization with the expectation appearing in the multi-linear extension \eqref{eq:multi-linear}. The same swap is known to yield concave objectives for several other optimality criteria, even when the latter are not submodular (see, \emph{e.g.}, \citeN{boyd2004convex}). Exploiting the convexity of such relaxations to design budget feasible mechanisms is an additional open problem of interest.
+%Many can be seen as scalarizations (\emph{i.e.}, scalar mappings) of the matrix $(X_S^TX_S)^{-1}$---the $\log\det$ being one of them. Studying such alternative objectives, even within the linear regression setting we study here, is also an interesting related problem. Crucially, o
%To be written. Will contain
%(a) list of extensions with a forward pointer to Appendix
%(b) some concluding remark that we initiated the area, the opt criteria is not a priori clear, etc.
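
For context on the submodularity claim in the retained paragraph above, here is a minimal sketch of why independent noise makes the information gain submodular. The notation is assumed for illustration and does not come from the paper: $\theta$ denotes the model parameters and $Y_i$ the outcome of experiment $i$.

% Sketch (assumed notation): \theta = model parameters, Y_i = outcome of
% experiment i, Y_S = (Y_i)_{i \in S}; outcomes are conditionally
% independent given \theta (``independent noise'').
\begin{align*}
  f(S) = I(\theta; Y_S) &= H(Y_S) - H(Y_S \mid \theta)\\
                        &= H(Y_S) - \sum_{i \in S} H(Y_i \mid \theta).
\end{align*}
% The joint entropy H(Y_S) is submodular in S, since its marginal gain
% H(Y_i | Y_S) can only shrink as S grows (conditioning reduces entropy),
% while the subtracted sum is modular; a submodular function minus a
% modular one is submodular, so f is submodular.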
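Similarly, a minimal sketch of the $\log\det$/expectation swap behind the relaxation discussed in the added paragraph. The specific objective below is an assumption for illustration, since \eqref{eq:our-relaxation} and \eqref{eq:multi-linear} are not reproduced in this diff: take the D-optimality objective $f(S) = \log\det(I_d + X_S^\top X_S)$, where $X_S$ stacks the rows $\mathbf{x}_i^\top$ with $i \in S$.

% Multi-linear extension: S contains each i independently with prob. y_i.
\[
  F(y) = \mathbb{E}_{S \sim y}\!\left[\log\det\!\left(I_d + X_S^\top X_S\right)\right],
  \qquad
  L(y) = \log\det\!\left(I_d + \sum_i y_i\, \mathbf{x}_i \mathbf{x}_i^\top\right).
\]
% L is obtained from F by swapping log det with the expectation, using
% E[X_S^T X_S] = sum_i y_i x_i x_i^T. Since log det is concave over
% positive definite matrices and the argument is affine in y, L is
% concave in y; by Jensen's inequality, L(y) >= F(y), so L is a concave
% upper bound on the multi-linear extension.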