From 8b0ff8b04df3adf80bf9b5bc6c98c224a8bd8f46 Mon Sep 17 00:00:00 2001
From: Thibaut Horel
Date: Mon, 8 Jul 2013 10:01:24 +0200
Subject: Fixing typos in the conclusion

---
 conclusion.tex | 22 ++++++++++++++++++++--
 1 file changed, 20 insertions(+), 2 deletions(-)

diff --git a/conclusion.tex b/conclusion.tex
index 667a0bb..5d32a1e 100644
--- a/conclusion.tex
+++ b/conclusion.tex
@@ -1,8 +1,26 @@
 We have proposed a convex relaxation for \EDP, and showed that it can be used to design a $\delta$-truthful, constant approximation mechanism that runs in polynomial time. Our objective function, commonly known as the Bayes $D$-optimality criterion, is motivated from linear regression, and in particular captures the information gain when experiments are used to learn a linear model. %in \reals^d.

-A natural question to ask is to what extent the results we present here generalize to other machine learning tasks beyond linear regression. We outline a path in pursueing such generalizations in Appendix~\ref{sec:ext}. In particular, although the information gain is not generally a submodular function, we show that for a wide class of models, in experiments outcomes are perturbed by independent noise, the information does indeed exhibit submodularity. Several important learning tasks fall under this category, including generalized linear regression, logistic regression, \emph{etc.} In light of this, it would be interesting to investigate whether our convex relaxation approach generalizes to other learning tasks in this broader class.
+A natural question to ask is to what extent the results we present here
+generalize to other machine learning tasks beyond linear regression. We outline
+a path in pursuing such generalizations in Appendix~\ref{sec:ext}. In
+particular, although the information gain is not generally a submodular
+function, we show that for a wide class of models, in which experiments
+outcomes are perturbed by independent noise, the information does indeed
+exhibit submodularity. Several important learning tasks fall under this
+category, including generalized linear regression, logistic regression,
+\emph{etc.} In light of this, it would be interesting to investigate whether
+our convex relaxation approach generalizes to other learning tasks in this
+broader class.

-The literature on experimental design includes several other optimality criteria~\cite{pukelsheim2006optimal,atkinson2007optimum}. Our convex relaxation \eqref{eq:our-relaxation} involved swapping the $\log\det$ scalarization with the expectation appearing in the multi-linear extension \eqref{eq:multi-linear}. The same swap is known to yield concave objectives for several other optimality criteria, even when the latter are not submodular (see, \emph{e.g.}, \citeN{boyd2004convex}). Exploiting the convexity of such relaxations to design budget feasible mechanisms is an additonal open problem of interest.
+The literature on experimental design includes several other optimality
+criteria~\cite{pukelsheim2006optimal,atkinson2007optimum}. Our convex
+relaxation \eqref{eq:our-relaxation} involved swapping the $\log\det$
+scalarization with the expectation appearing in the multi-linear extension
+\eqref{eq:multi-linear}. The same swap is known to yield concave objectives for
+several other optimality criteria, even when the latter are not submodular
+(see, \emph{e.g.}, \citeN{boyd2004convex}). Exploiting the convexity of such
+relaxations to design budget feasible mechanisms is an additional open problem
+of interest.

 %Many can be seen as scalarizations (\emph{i.e.}, scalar mappings) of the the matrix $(X_S^TX_T)^{-1}$---the $\log\det$ being one of them. Studying such alternative objectives, even within the linear regression setting we study here, is also an interesting related problem. Crucially, o
 %To be written. Will contain
-- 
cgit v1.2.3-70-g09d2


From a9a63ce0c0f152df3a8c50abe852bf61617b72b7 Mon Sep 17 00:00:00 2001
From: Thibaut Horel
Date: Mon, 8 Jul 2013 18:36:10 +0200
Subject: Fix theorem for non-homotropic prior

---
 general.tex | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/general.tex b/general.tex
index 87475a4..b219247 100644
--- a/general.tex
+++ b/general.tex
@@ -15,15 +15,17 @@ analysis of the approximation ratio \emph{mutatis mutandis}, we get the followin
 Theorem~\ref{thm:main}.

 \begin{theorem}
-    There exists a truthful, individually rational and budget feasible mechanism for the objective
-    function $V$ given by \eqref{dcrit}. Furthermore, for any $\varepsilon
-    > 0$, in time $O(\text{poly}(n, d, \log\log \varepsilon^{-1}))$,
-    the algorithm computes a set $S^*$ such that:
+    For any $\delta>0$, there exists a $\delta$-truthful, individually rational
+    and budget feasible mechanism for the objective function $V$ given by
+    \eqref{dcrit}. Furthermore, let us denote by $\lambda$ (resp. $\Lambda$) the
+    smallest (resp. largest) eigenvalue of $R$, then for any $\varepsilon > 0$, in time
+    $O(\text{poly}(n, d, \log\log\frac{B\Lambda}{\varepsilon\delta b\lambda}))$,
+    the mechanism computes a set $S^*$ such that:
     \begin{displaymath}
         OPT \leq
-        \frac{5e-1}{e-1}\frac{2\mu}{\log(1+\mu)}V(S^*) + 5.1 + \varepsilon
+        \bigg(\frac{6e-2}{e-1}\frac{1/\lambda}{\log(1+1/\lambda)}+ 4.66\bigg)V(S^*)
+        + \varepsilon.
     \end{displaymath}
-where $\mu$ is the smallest eigenvalue of $R$.
 \end{theorem}

 \subsection{Non-Bayesian Setting}
-- 
cgit v1.2.3-70-g09d2
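A note on the objective discussed in conclusion.tex above: the following is a minimal illustrative sketch, not taken from either commit, of one standard form of the Bayes $D$-optimality objective and of the "swap" of the $\log\det$ scalarization with the expectation in the multi-linear extension. The prior precision matrix $R$, the experiment vectors $x_i$, and the inclusion probabilities $y_i$ are assumed notation chosen to match the usual Bayesian linear-regression setting; the paper's actual \eqref{dcrit}, \eqref{eq:multi-linear} and \eqref{eq:our-relaxation} may differ in normalization.

% Sketch only, with the assumed notation described above; not part of either patch.
\begin{align*}
  V(S) &= \log\det\Big(R + \sum_{i \in S} x_i x_i^{T}\Big) - \log\det R
       && \text{(information gain of the experiment set $S$)} \\
  F(y) &= \mathbb{E}_{S \sim y}\big[V(S)\big]
       && \text{(multi-linear extension: $i \in S$ independently w.p.\ $y_i$)} \\
  L(y) &= \log\det\Big(R + \sum_{i} y_i\, x_i x_i^{T}\Big) - \log\det R
       && \text{(swap $\mathbb{E}$ and $\log\det$)}
\end{align*}

Because $\log\det$ is concave on positive definite matrices and its argument here is affine in $y$, $L$ is concave over $[0,1]^n$, and Jensen's inequality gives $F(y) \leq L(y)$; this is what makes the swapped objective usable as a convex-programming surrogate for the submodular set function $V$.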