| author | Thibaut Horel <thibaut.horel@gmail.com> | 2013-09-22 17:08:04 -0400 |
|---|---|---|
| committer | Thibaut Horel <thibaut.horel@gmail.com> | 2013-09-22 17:08:04 -0400 |
| commit | ca4d1e1ae5038ab392d35d6815ff4f5f49fb188c (patch) | |
| tree | 0f8c9695029a2deccf4866f83bbc3f9bb97df460 /conclusion.tex | |
| parent | 6989616462eea7f534d17bb6aa7ebdf4e172db4d (diff) | |
| download | recommendation-ca4d1e1ae5038ab392d35d6815ff4f5f49fb188c.tar.gz | |
Comment acks out and reduce conclusion
Diffstat (limited to 'conclusion.tex')
| -rwxr-xr-x | conclusion.tex | 15 |
1 file changed, 10 insertions, 5 deletions
```diff
diff --git a/conclusion.tex b/conclusion.tex
index 6a3917e..94d8202 100755
--- a/conclusion.tex
+++ b/conclusion.tex
@@ -1,6 +1,11 @@
-We have proposed a convex relaxation for \EDP, and showed that it can be used to design a $\delta$-truthful, constant approximation mechanism that runs in polynomial time. Our objective function, commonly known as the Bayes $D$-optimality criterion, is motivated by linear regression, and in particular captures the information gain when experiments are used to learn a linear model. %in \reals^d.
+We have proposed a convex relaxation for \EDP, and showed that it can be used
+to design a $\delta$-truthful, constant approximation mechanism that runs in
+polynomial time. Our objective function, commonly known as the Bayes
+$D$-optimality criterion, is motivated by linear regression.
+%and in particular captures the information gain when experiments are used to learn a linear model in \reals^d.
 
-A natural question to ask is to what extent the results we present here
+A natural question to ask is to what extent the results
+%we present here
 generalize to other machine learning tasks beyond linear regression. We
 outline a path in pursuing such generalizations in Appendix~\ref{sec:ext}.
 In particular, although the information gain is not generally a submodular
@@ -9,15 +14,15 @@ outcomes are perturbed by independent noise, the information gain indeed
 exhibits submodularity. Several important learning tasks fall under this
 category, including generalized linear regression, logistic regression,
 \emph{etc.} In light of this, it would be interesting to investigate whether
-our convex relaxation approach generalizes to other learning tasks in this
-broader class.
+our convex relaxation approach generalizes to other tasks in this broader class.
 
 The literature on experimental design includes several other optimality
 criteria~\cite{pukelsheim2006optimal,atkinson2007optimum}. Our convex
 relaxation \eqref{eq:our-relaxation} involved swapping the $\log\det$
 scalarization with the expectation appearing in the multi-linear extension
 \eqref{eq:multi-linear}. The same swap is known to yield concave objectives for
-several other optimality criteria, even when the latter are not submodular
+several other optimality criteria
+%, even when the latter are not submodular
 (see, \emph{e.g.}, \citeN{boyd2004convex}).  Exploiting the convexity of such
 relaxations to design budget feasible mechanisms is an additional open problem
 of interest.
```
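For context on the passage being edited, the "swap" the conclusion refers to can be sketched as follows. This is an illustrative reconstruction only: the notation ($A$ for a prior precision matrix, $x_i$ for the design vector of experiment $i$, $\lambda_i$ for its marginal selection probability) is assumed, since the paper's actual \eqref{eq:our-relaxation} and \eqref{eq:multi-linear} are not shown in this diff.

```latex
% Illustrative sketch, not the paper's actual equations: A, x_i, and
% \lambda_i are assumed notation (prior precision matrix, design vector
% of experiment i, and its marginal selection probability).
% Multi-linear extension: expectation of the \log\det scalarization.
\[
  F(\lambda) \;=\; \mathbb{E}_{S \sim \lambda}\!\left[\log\det\Bigl(A
    + \sum_{i \in S} x_i x_i^{\top}\Bigr)\right]
\]
% Swapped relaxation: the expectation is moved inside the \log\det.
% The argument is affine in \lambda and \log\det is concave on positive
% definite matrices, so L is concave in \lambda; by Jensen's inequality,
% L(\lambda) \geq F(\lambda).
\[
  L(\lambda) \;=\; \log\det\Bigl(A
    + \sum_{i=1}^{n} \lambda_i\, x_i x_i^{\top}\Bigr)
\]
```

The same exchange of expectation and scalarization yields a concave objective for several other optimality criteria, which is the open direction the revised conclusion points to.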
