path: root/intro.tex
author Stratis Ioannidis <stratis@stratis-Latitude-E6320.(none)> 2013-07-07 21:21:23 -0700
committer Stratis Ioannidis <stratis@stratis-Latitude-E6320.(none)> 2013-07-07 21:21:23 -0700
commit 8562564268f83ca4ea0a4d6cd316a61f8b66f6b2 (patch)
tree c49c9499bc314ccb386c2380b5514cc66f50e583 /intro.tex
parent 56ef116dcbe0d4b81f7b5bc2d38d9d51add2c62a (diff)
download recommendation-8562564268f83ca4ea0a4d6cd316a61f8b66f6b2.tar.gz
minor
Diffstat (limited to 'intro.tex')
-rw-r--r-- intro.tex | 6
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/intro.tex b/intro.tex
index 9b51333..22d7a69 100644
--- a/intro.tex
+++ b/intro.tex
@@ -61,10 +61,10 @@ We note that the objective \eqref{obj} is submodular. Using this fact, applying
%From a technical perspective, we present a convex relaxation of \eqref{obj}, and show that its optimal value is within a constant factor from the so-called multi-linear relaxation of \eqref{obj}, which in turn can be related to \eqref{obj} through pipage rounding. We establish the constant factor to the multi-linear relaxation by bounding the partial derivatives of these two functions; we achieve the latter by exploiting convexity properties of matrix functions over the convex cone of positive semidefinite matrices.
-From a technical perspective, we propose a convex optimization problem and establish that its optimal value within a constant factor from the optimal value of \EDP.
+From a technical perspective, we propose a convex optimization problem and establish that its optimal value is within a constant factor from the optimal value of \EDP.
In particular, we show our relaxed objective is within a constant factor from the so-called multi-linear extension of \eqref{obj}, which in turn can be related to \eqref{obj} through pipage rounding. We establish the constant factor to the multi-linear extension by bounding the partial derivatives of these two functions; we achieve the latter by exploiting convexity properties of matrix functions over the convex cone of positive semidefinite matrices.
-Our convex relaxation of \EDP{} involves maximizing a self-concordant function subject to linear constraints. Its optimal value can be computed with arbitrary accuracy in polynomial time using the so-called barrier method. However, the outcome of this computation may not necessarily be monotone, a property needed in designing a truthful mechanism. Nevetheless, we construct an algorithm that solves the above convex relaxation and is ``almost'' monotone; we achieve this by applying the barrier method on a set perturbed constraints, over which our objective is ``sufficiently'' concave. In turn, we also show that this algorithm can be employed to design a $\delta$-truthful mechanism for \EDP{}.
+Our convex relaxation of \EDP{} involves maximizing a self-concordant function subject to linear constraints. Its optimal value can be computed with arbitrary accuracy in polynomial time using the so-called barrier method. However, the outcome of this computation may not necessarily be monotone, a property needed in designing a truthful mechanism. Nevertheless, we construct an algorithm that solves the above convex relaxation and is ``almost'' monotone; we achieve this by applying the barrier method on a set of perturbed constraints, over which our objective is ``sufficiently'' concave. In turn, we show how to employ this algorithm to design a poly-time, $\delta$-truthful, constant-approximation mechanism for \EDP{}.
%This allows us to adopt the approach followed by prior work in budget feasible mechanisms by Chen \emph{et al.}~\cite{chen} and Singer~\cite{singer-influence}. %{\bf FIX the last sentence}
@@ -73,7 +73,7 @@ Our convex relaxation of \EDP{} involves maximizing a self-concordant function s
%Our approach to mechanisms for experimental design --- by
% optimizing the information gain in parameters like $\beta$ which are estimated through the data analysis process --- is general. We give examples of this approach beyond linear regression to a general class that includes logistic regression and learning binary functions, and show that the corresponding budgeted mechanism design problem is also expressed through a submodular optimization. Hence, prior work \cite{chen,singer-mechanisms} immediately applies, and gives randomized, universally truthful, polynomial time, constant factor approximation mechanisms for problems in this class. Getting deterministic, truthful, polynomial time mechanisms with a constant approximation factor for this class or specific problems in it, like we did for \EDP, remains an open problem.
-In what follows, we describe related work in Section~\ref{sec:related}. We briefly review experimental design and budget feasible mechanisms in Section~\ref{sec:peel} and define \SEDP\ formally. We present our convex relaxation to \EDP{} in Section~\ref{sec:approximation} and, finally, show how it can be used to construct our mechanism in Section~\ref{sec:main}; all proofs of our technical results are provided in the appendix. %we present our mechanism for \SEDP\ and state our main results. %A generalization of our framework to machine learning tasks beyond linear regression is presented in Section~\ref{sec:ext}.
+In what follows, we describe related work in Section~\ref{sec:related}. We briefly review experimental design and budget feasible mechanisms in Section~\ref{sec:peel} and define \SEDP\ formally. We present our convex relaxation to \EDP{} in Section~\ref{sec:approximation} and use it to construct our mechanism in Section~\ref{sec:main}. We conclude in Section~\ref{sec:concl}. All proofs of our technical results are provided in the appendix. %we present our mechanism for \SEDP\ and state our main results. %A generalization of our framework to machine learning tasks beyond linear regression is presented in Section~\ref{sec:ext}.
\junk{
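As an aside, the barrier method referenced in the edited paragraph can be illustrated with a minimal toy sketch. This is NOT the paper's algorithm (the paper's objective, constraints, and perturbation scheme are all different); it is a generic log-barrier ascent on an assumed stand-in problem: maximize the concave, self-concordant objective log(x1) + log(x2) subject to the single linear constraint x1 + x2 <= 1. The slack of the constraint enters through a logarithmic barrier, and the barrier weight t is increased geometrically, tracing the central path toward the constrained optimum at (0.5, 0.5).

```python
import math

def barrier_solve(t0=1.0, mu=10.0, outer=5, inner=500):
    """Toy log-barrier method: maximize log(x1) + log(x2)
    subject to x1 + x2 <= 1. A stand-in illustration only,
    not the algorithm of the paper."""
    def F(x, t):
        # Barrier-augmented objective: t * f(x) + log(slack).
        s = 1.0 - x[0] - x[1]            # slack of the linear constraint
        if s <= 0.0 or min(x) <= 0.0:    # outside the barrier's domain
            return -math.inf
        return t * (math.log(x[0]) + math.log(x[1])) + math.log(s)

    x, t = [0.25, 0.25], t0              # strictly feasible start
    for _ in range(outer):               # outer loop: t -> mu * t
        for _ in range(inner):           # inner loop: gradient ascent
            s = 1.0 - x[0] - x[1]
            g = [t / xi - 1.0 / s for xi in x]   # gradient of F(., t)
            step = 1.0                   # backtracking: halve until F improves
            while F([xi + step * gi for xi, gi in zip(x, g)], t) <= F(x, t):
                step *= 0.5
                if step < 1e-12:
                    break
            x = [xi + step * gi for xi, gi in zip(x, g)]
        t *= mu
    return x

x = barrier_solve()   # approaches the constrained optimum (0.5, 0.5)
```

The backtracking line search keeps every iterate strictly feasible (the barrier returns -inf outside the domain, so such steps are always rejected), which is the standard mechanism by which interior-point iterations stay in the constraint set.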