From b19e9c8c9c49da4afa893134dcff8954e7a2c240 Mon Sep 17 00:00:00 2001 From: Stratis Ioannidis Date: Sat, 6 Jul 2013 00:08:44 -0700 Subject: intro related --- intro.tex | 11 ++++++++--- 1 file changed, 8 insertions(+), 3 deletions(-) diff --git a/intro.tex b/intro.tex index 85f4f3f..3a2925b 100644 --- a/intro.tex +++ b/intro.tex @@ -41,7 +41,7 @@ subject to a budget constraint $\sum_{i\in S}c_i\leq B$, where $B$ is \E's budge \smallskip The objective function, which is the key, is formally obtained by optimizing the information gain in $\beta$ when the latter is learned through ridge regression, and is related to the so-called $D$-optimality criterion~\cite{pukelsheim2006optimal,atkinson2007optimum}. \item -We present a polynomial time, $\epsilon$-truthful mechanism for \SEDP{}, yielding a constant factor ($\approx 12.98$) approximation to the optimal value of \eqref{obj}. In contrast to this, we show that no truthful, budget-feasible mechanisms are possible for \SEDP{} within a factor 2 approximation. +We present a polynomial time, $\delta$-truthful mechanism for \SEDP{}, yielding a constant factor ($\approx 12.98$) approximation to the optimal value of \eqref{obj}. In contrast, we show that no truthful, budget-feasible mechanism for \SEDP{} can achieve a factor-2 approximation. \smallskip We note that the objective \eqref{obj} is submodular. Using this fact, applying previous results on budget feasible mechanism design under general submodular objectives~\cite{singer-mechanisms,chen} would yield either a deterministic, truthful, constant-approximation mechanism that requires exponential time, or a non-deterministic, (universally) truthful, poly-time mechanism that yields a constant approximation ratio only \emph{in expectation} (\emph{i.e.}, its approximation guarantee for a given instance may in fact be unbounded). @@ -59,7 +59,12 @@ We note that the objective \eqref{obj} is submodular. 
Using this fact, applying %Though such mechanisms were known to exist for combinatorial problems with specific submodular objectives such as \textsc{Knapsack} or \textsc{Coverage}~\cite{singer-mechanisms,chen, singer-influence}, these do not readily apply to the more complicated linear-algebraic objective function \eqref{obj} of \SEDP. %{\bf S+T: could we verify that the above sentence is correct in its implication?} -From a technical perspective, we present a convex relaxation of \eqref{obj}, and show that it is within a constant factor from the so-called multi-linear relaxation of \eqref{obj}, which in turn can be related to \eqref{obj} through pipage rounding. We establish the constant factor to the multi-linear relaxation by bounding the partial derivatives of these two functions; we achieve the latter by exploiting convexity properties of matrix functions over the convex cone of positive semidefinite matrices. +%From a technical perspective, we present a convex relaxation of \eqref{obj}, and show that its optimal value is within a constant factor from the so-called multi-linear relaxation of \eqref{obj}, which in turn can be related to \eqref{obj} through pipage rounding. We establish the constant factor to the multi-linear relaxation by bounding the partial derivatives of these two functions; we achieve the latter by exploiting convexity properties of matrix functions over the convex cone of positive semidefinite matrices. + +From a technical perspective, we propose a convex optimization problem and establish that its optimal value is within a constant factor of the optimal value of \EDP. + In particular, we show that our relaxed objective is within a constant factor of the so-called multi-linear extension of \eqref{obj}, which in turn can be related to \eqref{obj} through pipage rounding. 
We establish the constant factor with respect to the multi-linear extension by bounding the partial derivatives of these two functions; we achieve the latter by exploiting convexity properties of matrix functions over the convex cone of positive semidefinite matrices. + +Our convex relaxation of \EDP{} involves maximizing a self-concordant function subject to linear constraints. Its optimal value can be computed to arbitrary accuracy in polynomial time using the so-called barrier method. However, the outcome of this computation is not necessarily monotone, a property needed in designing a truthful mechanism. Nevertheless, we construct an algorithm that solves the above convex relaxation and is ``almost'' monotone; in turn, we show that this algorithm can be employed to design a $\delta$-truthful mechanism for \EDP{}. %This allows us to adopt the approach followed by prior work in budget feasible mechanisms by Chen \emph{et al.}~\cite{chen} and Singer~\cite{singer-influence}. %{\bf FIX the last sentence} @@ -68,7 +73,7 @@ From a technical perspective, we present a convex relaxation of \eqref{obj}, and %Our approach to mechanisms for experimental design --- by % optimizing the information gain in parameters like $\beta$ which are estimated through the data analysis process --- is general. We give examples of this approach beyond linear regression to a general class that includes logistic regression and learning binary functions, and show that the corresponding budgeted mechanism design problem is also expressed through a submodular optimization. Hence, prior work \cite{chen,singer-mechanisms} immediately applies, and gives randomized, universally truthful, polynomial time, constant factor approximation mechanisms for problems in this class. Getting deterministic, truthful, polynomial time mechanisms with a constant approximation factor for this class or specific problems in it, like we did for \EDP, remains an open problem. 
-In what follows, we describe related work in Section~\ref{sec:related}. We briefly review experimental design and budget feasible mechanisms in Section~\ref{sec:peel} and define \SEDP\ formally. In Section~\ref{sec:main} we present our mechanism for \SEDP\ and state our main results. A generalization of our framework to machine learning tasks beyond linear regression is presented in Section~\ref{sec:ext}. +In what follows, we describe related work in Section~\ref{sec:related}. We briefly review experimental design and budget feasible mechanisms in Section~\ref{sec:peel} and define \SEDP\ formally. We present our convex relaxation of \EDP{} in Section~\ref{sec:approximation} and, finally, show how it can be used to construct our mechanism in Section~\ref{sec:main}. %we present our mechanism for \SEDP\ and state our main results. %A generalization of our framework to machine learning tasks beyond linear regression is presented in Section~\ref{sec:ext}. \junk{ -- cgit v1.2.3-70-g09d2