We have proposed a convex relaxation for \EDP, and shown that it can be used
to design a $\delta$-truthful, constant-factor approximation mechanism that
runs in polynomial time. Our objective function, commonly known as the Bayes
$D$-optimality criterion, is motivated by linear regression; in particular, it
captures the information gain when experiments are used to learn a linear
model in $\reals^d$.
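
For concreteness, a minimal sketch of this objective, assuming a standard
Gaussian prior on the regression parameters and unit-variance noise (the
normalizations here are illustrative and may differ from our formal
definition): the value of a set $S$ of experiments with measurement vectors
$x_i \in \reals^d$ is
\[
f(S) = \log\det\Big(I_d + \sum_{i \in S} x_i x_i^T\Big),
\]
which, up to constant factors, is the entropy reduction of the Gaussian
posterior after observing the outcomes in $S$.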

A natural question to ask is to what extent these results generalize to
machine learning tasks beyond linear regression. We outline a path toward such
generalizations in Appendix~\ref{sec:ext}. In particular, although the
information gain is not a submodular function in general, we show that for a
wide class of models, in which experiment outcomes are perturbed by
independent noise, the information gain is indeed submodular. Several
important learning tasks fall in this category, including generalized linear
regression and logistic regression. In light of this, it would be interesting
to investigate whether our convex relaxation approach generalizes to other
tasks in this broader class; the sketch below outlines the underlying
argument.
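
To sketch why independent noise yields submodularity (a standard
information-theoretic argument, stated here informally; the formal treatment
is in Appendix~\ref{sec:ext}): if the outcomes $y_i$ are conditionally
independent given the model parameter $\theta$, the information gain
decomposes as
\[
I(\theta; y_S) = H(y_S) - H(y_S \mid \theta)
              = H(y_S) - \sum_{i \in S} H(y_i \mid \theta).
\]
The joint entropy $H(y_S)$ is submodular in $S$, and the subtracted term is
modular, so the information gain is submodular.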

The literature on experimental design includes several other optimality
criteria~\cite{pukelsheim2006optimal,atkinson2007optimum}; many can be seen as
scalarizations (\emph{i.e.}, scalar mappings) of the matrix
$(X_S^TX_S)^{-1}$, the $\log\det$ being one of them. Our convex relaxation
\eqref{eq:our-relaxation} was obtained by swapping the $\log\det$
scalarization with the expectation appearing in the multi-linear extension
\eqref{eq:multi-linear}. The same swap is known to yield concave objectives
for several other optimality criteria, even when the latter are not submodular
(see, \emph{e.g.}, \citeN{boyd2004convex}). Exploiting the convexity of such
relaxations to design budget feasible mechanisms is an additional open problem
of interest.
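
To make the swap concrete (a sketch with assumed notation, not taken verbatim
from \eqref{eq:our-relaxation}): for $\lambda \in [0,1]^n$, the multi-linear
extension includes each experiment $i$ independently with probability
$\lambda_i$ and evaluates
\[
F(\lambda) = \mathbb{E}_{S \sim \lambda}\Big[\log\det\Big(I_d + \sum_{i \in S} x_i x_i^T\Big)\Big],
\]
whereas the relaxation moves the expectation inside the scalarization:
\[
L(\lambda) = \log\det\Big(I_d + \sum_{i=1}^n \lambda_i x_i x_i^T\Big).
\]
The argument of $\log\det$ is affine in $\lambda$ and $\log\det$ is concave
over positive definite matrices, so $L$ is concave, and Jensen's inequality
gives $L(\lambda) \ge F(\lambda)$. The same reasoning applies to any concave
scalarization, such as the negative trace of the inverse used in
$A$-optimality.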
