Diffstat (limited to 'main.tex')
| -rw-r--r-- | main.tex | 229 |
1 file changed, 119 insertions, 110 deletions
@@ -12,16 +12,15 @@ This motivates us to look at $$V(S) = \log\det(I_d+\T{X_S}X_S).$$ Interesting fo \subsection{Truthful, Constant Approximation Mechanism} -In this section we present a mechanism for the problem described in -section~\ref{sec:auction}. Previous works on maximizing submodular functions -\cite{nemhauser, sviridenko-submodular} and desiging auction mechanisms for -submodular utility functions \cite{singer-mechanisms, chen, singer-influence} -rely on a greedy heuristic. In this heuristic, points which maximize the -\emph{marginal-contribution-per-cost} ratio are greedily added to the solution -set. The \emph{marginal-contribution-per-cost} ratio of a point $i$ with cost -$c_i$ to the set $S$ is defined by: +In this section we present a mechanism for \EDP. Previous works on maximizing +submodular functions \cite{nemhauser, sviridenko-submodular} and designing +auction mechanisms for submodular utility functions \cite{singer-mechanisms, +chen, singer-influence} rely on a greedy heuristic. In this heuristic, points +are added to the solution set according to the following greedy selection rule: +assume that a set $S$ of points has already been selected; then the next point +to be selected is: \begin{displaymath} - \frac{V(S\cup\{i\}) - V(S)}{c_i} + i = \argmax_{j\in\mathcal{N}\setminus S}\frac{V(S\cup\{j\}) - V(S)}{c_j} \end{displaymath} This is the generalization of the \emph{value-per-cost} ratio used in the greedy heuristic for knapsack problems. However note that for general submodular @@ -32,12 +31,25 @@ Unfortunately, even for the non-strategic case, the greedy heuristic gives an unbounded approximation ratio. It has been noted by Khuller et al. \cite{khuller} that for the maximum coverage problem, taking the maximum between the greedy solution and the point of maximum value gives -a $\frac{2e}{e-1}$ approximation ratio.
In the general case, lemma 3.1 from -\cite{singer-influence} which follows from \cite{chen}, shows that this has an -approximation ratio of $\frac{5e}{e-1}$ (see lemma~\ref{lemma:greedy-bound} -below). However, Singer \cite{singer-influence} notes that this approach breaks -incentive compatibility and therefore cannot be directly applied to the -strategic case. +a $\frac{2e}{e-1}$ approximation ratio. In the general case, let us recall +lemma 3.1 from \cite{singer-influence} which follows from \cite{chen}: + +\begin{lemma}[Singer \cite{singer-influence}]\label{lemma:greedy-bound} +Let $S_G$ be the set computed by the greedy heuristic and $i^*$ the point of +maximum value: +\begin{displaymath} + i^* = \argmax_{i\in\mathcal{N}} V(i) +\end{displaymath} +then the following inequality holds: +\begin{displaymath} + OPT(V,\mathcal{N},B) \leq \frac{5e}{e-1}\max\big\{ V(S_G), V(i^*)\big\} +\end{displaymath} +\end{lemma} + +Hence, taking the maximum between the greedy set and the point of maximum value +has an approximation ratio of $\frac{5e}{e-1}$. However, Singer +\cite{singer-influence} notes that this approach breaks incentive compatibility +and therefore cannot be directly applied to the strategic case. Two approaches have been studied to deal with the strategic case and rely on comparing the point of maximum value to a quantity which can be proven to be @@ -52,17 +64,17 @@ a relaxation of the value function which can be proven to have a constant approximation ratio to the value function. \end{itemize} -Here, we will use a specific relaxation of the value function \eqref{vs}. Let -us define the function $L_\mathcal{N}$: +Here, we will use the following relaxation of the value function \eqref{vs}. 
+Let us define the function $L_\mathcal{N}$: \begin{displaymath} \forall\lambda\in[0,1]^{|\mathcal{N}|}\,\quad L_{\mathcal{N}}(\lambda) \defeq \log\det\left(I_d + \mu\sum_{i\in\mathcal{N}} \lambda_i x_i \T{x_i}\right) \end{displaymath} -Our mechanism for ridge regression is presented in Algorithm~\ref{mechanism}. +Our mechanism for \EDP{} is presented in Algorithm~\ref{mechanism}. \begin{algorithm} - \caption{Mechanism for ridge regression}\label{mechanism} + \caption{Mechanism for \EDP{}}\label{mechanism} \begin{algorithmic}[1] \State $i^* \gets \argmax_{j\in\mathcal{N}}V(j)$ \State $x^* \gets \argmax_{x\in[0,1]^{|\mathcal{N}|}} \{L_{\mathcal{N}\setminus\{i^*\}}(x) @@ -72,13 +84,13 @@ Our mechanism for ridge regression is presented in Algorithm~\ref{mechanism}. \State \textbf{return} $\{i^*\}$ \Else \State $i \gets \argmax_{1\leq j\leq n}\frac{V(j)}{c_j}$ - \State $S \gets \emptyset$ - \While{$c_i\leq \frac{B}{2}\frac{V(S\cup\{i\})-V(S)}{V(S\cup\{i\})}$} - \State $S \gets S\cup\{i\}$ - \State $i \gets \argmax_{j\in\mathcal{N}\setminus S} - \frac{V(S\cup\{j\})-V(S)}{c_j}$ + \State $S_G \gets \emptyset$ + \While{$c_i\leq \frac{B}{2}\frac{V(S_G\cup\{i\})-V(S_G)}{V(S_G\cup\{i\})}$} + \State $S_G \gets S_G\cup\{i\}$ + \State $i \gets \argmax_{j\in\mathcal{N}\setminus S_G} + \frac{V(S_G\cup\{j\})-V(S_G)}{c_j}$ \EndWhile - \State \textbf{return} $S$ + \State \textbf{return} $S_G$ \EndIf \end{algorithmic} \end{algorithm} @@ -89,15 +101,15 @@ Our mechanism for ridge regression is presented in Algorithm~\ref{mechanism}. lemma~\ref{lemma:concave}) thus the maximization step on line 2 of the mechanism can be computed in polynomial time, which proves that the mechanism overall has a polynomial complexity. - \item the stopping rule in the while loop is more sophiticated than just + \item the stopping rule in the while loop is more sophisticated than just checking against the budget constraint ($c(S) \leq B$). 
This is to ensure budget feasibility (see lemma~\ref{lemma:budget-feasibility}). \end{enumerate} We can now state the main result of this section: -\begin{theorem} +\begin{theorem}\label{thm:main} The mechanism described in Algorithm~\ref{mechanism} is truthful, - individually rational, budget feasible. Furthermore, choosing: + individually rational and budget feasible. Furthermore, choosing: \begin{multline*} C = C^* = \frac{5e-1 + C_\mu(2e+1)}{2C_\mu(e-1)}\\ + \frac{\sqrt{C_\mu^2(1+2e)^2 @@ -115,17 +127,29 @@ We can now state the main result of this section: \end{displaymath} \end{theorem} -The proof will consist of the claims of the theorem broken down into lemmas. +\begin{proof} +\emph{Truthfulness.} The algorithm only describes the allocation rule. +However, it suffices to prove that the mechanism is monotone: +Myerson's theorem (see TODO) then ensures that paying each allocated +user his threshold payment yields a truthful mechanism. The proof of +monotonicity already appears in \cite{singer-influence} and is given +here in lemma~\ref{lemma:monotone} below for the sake of +completeness. -Note that this is a single parameter mechanism. Hence, by using Myerson's -characterization of truthful mechanisms \cite{myerson}, proving truthfulness -amounts to proving the monotonicity of the mechanism: if a user is selected by -the mechanism when reporting a cost $c_i$, then he is still selected when -reporting another cost $c_i'\leq c_i$ provided that the remaining users do not -change their costs. +\emph{Budget feasibility.} Thanks to the analysis of the threshold +payments in \cite{chen}, budget feasibility follows easily. The +proof is given in lemma~\ref{lemma:budget-feasibility} below.
+\emph{Approximation ratio.} The proof of the approximation ratio follows the +same path as in \cite{chen} and is carried out in lemma~\ref{lemma:approx}. Our main +contribution is to prove that the relaxation $L_\mathcal{N}$ has a constant +approximation ratio to the optimal solution (lemma~\ref{lemma:relaxation}). This +follows from the \emph{pipage rounding} method described in \cite{pipage}, where +$L_\mathcal{N}$ is first compared to the \emph{multilinear relaxation} of the +value function (lemma~\ref{lemma:relaxation-ratio}), for which it is possible to +round solutions (making fractional components integral) while maintaining +feasibility (lemma~\ref{lemma:rounding}). +\end{proof} \begin{lemma}\label{lemma:monotone} The mechanism is monotone. @@ -166,56 +190,32 @@ The mechanism is budget feasible. \end{lemma} \begin{proof} -Let us denote by $S_M$ the set selected by the greedy heuristic in the -mechanism of Algorithm~\ref{mechanism}. Let $i\in S_M$, let us also denote by +Let us denote by $S_G$ the set selected by the greedy heuristic in the +mechanism of Algorithm~\ref{mechanism}. Let $i\in S_G$ and let us denote by $S_i$ the solution set that was selected by the greedy heuristic before $i$ was added. Recall from \cite{chen} that the following holds for any submodular function: if the point $i$ was selected by the greedy heuristic, then: \begin{equation}\label{eq:budget} - c_i \leq \frac{V(S_i\cup\{i\}) - V(S)}{V(S_M)} B + c_i \leq \frac{V(S_i\cup\{i\}) - V(S_i)}{V(S_G)} B \end{equation} -Assume now that our mechanism selects point $i^*$. In this case, his payement +Assume now that our mechanism selects point $i^*$. In this case, his payment is $B$ and the mechanism is budget-feasible. -Otherwise, the mechanism selects the set $S_M$. In this case, \eqref{eq:budget} -shows that the threshold payement of user $i$ is bounded by: +Otherwise, the mechanism selects the set $S_G$.
In this case, \eqref{eq:budget} +shows that the threshold payment of user $i$ is bounded by: \begin{displaymath} -\frac{V(S_i\cup\{i\}) - V(S_i)}{V(S_M)} B +\frac{V(S_i\cup\{i\}) - V(S_i)}{V(S_G)} B \end{displaymath} -Hence, the total payement is bounded by: +Hence, the total payment is bounded by: \begin{displaymath} - \sum_{i\in S_M} \frac{V(S_i\cup\{i\}) - V(S_i)}{V(S_M)} B \leq B + \sum_{i\in S_G} \frac{V(S_i\cup\{i\}) - V(S_i)}{V(S_G)} B \leq B \end{displaymath} \end{proof} -The following lemma proves that the relaxation $L_\mathcal{N}$ that we are -using has a bounded approximation ratio to the value function $V$. For -readibility, the proof is postponed to section~\ref{sec:relaxation}. - -\begin{lemma}\label{lemma:relaxation} - We have: - \begin{displaymath} - OPT(L_\mathcal{N}, B) \leq \frac{1}{C_\mu}\big(2 OPT(V,\mathcal{N},B) - + \max_{i\in\mathcal{N}}V(i)\big) - \end{displaymath} -\end{lemma} - -Let us recall here the following lemma from \cite{chen} which we use in the -proof of lemma~\ref{lemma:approx}. This lemma shows, as mentioned above, that -taking the maximum between the greedy solution and the point of maximum value -gives a $\frac{5e}{e-1}$ approximation ratio. -\begin{lemma}\label{lemma:greedy-bound} -The following inequality holds: -\begin{displaymath} - OPT(V,\mathcal{N},B) \leq \frac{5e}{e-1}\max\big( V(S_M), V(i^*)\big) -\end{displaymath} -\end{lemma} - \begin{lemma}\label{lemma:approx} - Let us denote by $S_M$ the set returned by the mechanism. Let us also - write: + Let $S^*$ be the set allocated by the mechanism.
Let us write: \begin{displaymath} C_{\textrm{max}} = \max\left(1+C,\frac{e}{e-1}\left( 3 + \frac{12e}{C\cdot C_\mu(e-1) -5e +1}\right)\right) @@ -224,7 +224,7 @@ The following inequality holds: Then: \begin{displaymath} OPT(V, \mathcal{N}, B) \leq - C_\text{max}\cdot V(S_M) + C_\text{max}\cdot V(S^*) \end{displaymath} \end{lemma} @@ -247,24 +247,25 @@ The following inequality holds: V(i^*) \geq \frac{1}{C+1} OPT(V,\mathcal{N}, B) \end{displaymath} - If the condition of the algorithm does not hold: + If the condition of the algorithm does not hold, by applying lemmas + \ref{lemma:relaxation} and \ref{lemma:greedy-bound}: \begin{align*} V(i^*) & \leq \frac{1}{C}L(x^*) \leq \frac{1}{C\cdot C_\mu} \big(2 OPT(V,\mathcal{N}, B) + V(i^*)\big)\\ - & \leq \frac{1}{C\cdot C_\mu}\left(\frac{2e}{e-1}\big(3 V(S_M) + & \leq \frac{1}{C\cdot C_\mu}\left(\frac{2e}{e-1}\big(3 V(S_G) + 2 V(i^*)\big) + V(i^*)\right) \end{align*} Thus: \begin{align*} - V(i^*) \leq \frac{6e}{C\cdot C_\mu(e-1)- 5e + 1} V(S_M) + V(i^*) \leq \frac{6e}{C\cdot C_\mu(e-1)- 5e + 1} V(S_G) \end{align*} Finally, using again lemma~\ref{lemma:greedy-bound}, we get: \begin{displaymath} OPT(V, \mathcal{N}, B) \leq \frac{e}{e-1}\left( 3 + \frac{12e}{C\cdot - C_\mu(e-1) -5e +1}\right) V(S_M) + C_\mu(e-1) -5e +1}\right) V(S_G) \end{displaymath} \end{proof} @@ -280,14 +281,11 @@ This equation has two solutions. Only one of those is such that: which is needed in the proof of the previous lemma. Computing this solution gives the result of the theorem. -\subsection{Relaxations of the value function}\label{sec:relaxation} +\subsection{An approximation ratio for $L_\mathcal{N}$}\label{sec:relaxation} -To prove lemma~\ref{lemma:relaxation}, we use a general method called pipage -rounding introduced in \cite{pipage}.
This method relies on \emph{piping} two -relaxations of the value function, one being the \emph{multilinear extension} -introduced below, the other one being the relaxation $L_\mathcal{N}$ already -introduced in our mechanism. At each position of the pipe, we show that we keep -a bounded approximation ratio to the original value function. +Our main contribution is lemma~\ref{lemma:relaxation} which gives an +approximation ratio for $L_\mathcal{N}$ and is key in the proof of +theorem~\ref{thm:main}. We say that $R_\mathcal{N}:[0,1]^{|\mathcal{N}|}\rightarrow\reals$ is a relaxation of the value function $V$ over $\mathcal{N}$ if it coincides with @@ -309,6 +307,7 @@ The optimisation problem becomes: OPT(R_\mathcal{N}, B) = \max_{\lambda\in[0,1]^{|\mathcal{N}|}}\left\{R_\mathcal{N}(\lambda)\,|\, c(\lambda)\leq B\right\} \end{displaymath} + The relaxations we will consider here rely on defining a probability distribution over subsets of $\mathcal{N}$. @@ -322,26 +321,29 @@ a subset of $\mathcal{N}$ at random by deciding independently for each point to include it in the set with probability $\lambda_i$ (and to exclude it with probability $1-\lambda_i$). 
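To make the definitions concrete, here is a minimal numerical sketch (not part of the paper's code; the names `V`, `L`, `sample_set` and the choice $\mu = 1$ with small random data are assumptions for illustration). It checks the defining property of a relaxation: at an integral $\lambda$ (the indicator vector of a set $S$), $L_\mathcal{N}(\lambda)$ coincides with the value function $V(S) = \log\det(I_d + \T{X_S}X_S)$, and it shows how to draw $S \sim P_\mathcal{N}^\lambda$ by independent inclusion.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 3, 5
X = rng.normal(size=(n, d))          # one row x_i per point of N

def V(S):
    """Value function V(S) = log det(I_d + X_S^T X_S)."""
    XS = X[sorted(S)]
    return np.linalg.slogdet(np.eye(d) + XS.T @ XS)[1]

def L(lam, mu=1.0):
    """Concave relaxation L_N(lam) = log det(I_d + mu * sum_i lam_i x_i x_i^T)."""
    A = np.eye(d) + mu * (X.T * lam) @ X   # columns of X.T scaled by lam_i
    return np.linalg.slogdet(A)[1]

def sample_set(lam):
    """Draw S ~ P_N^lam: include each point i independently with prob lam_i."""
    return {i for i in range(n) if rng.random() < lam[i]}

# At an integral lam (the indicator of S), L coincides with V(S) when mu = 1,
# i.e. L is indeed a relaxation of V in the sense defined above.
S = {0, 2, 4}
lam = np.array([1.0 if i in S else 0.0 for i in range(n)])
assert abs(L(lam) - V(S)) < 1e-9
```

For $\mu \neq 1$ the integral values of $L_\mathcal{N}$ and $V$ differ, which is where the constant $C_\mu$ in the analysis comes from.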
-We will consider two relaxations of the value function $V$ over $\mathcal{N}$: -\begin{itemize} - \item the \emph{multi-linear extension} of $V$: - \begin{align*} - F_\mathcal{N}(\lambda) - & = \mathbb{E}_{S\sim P_\mathcal{N}^\lambda}\big[\log\det A(S)\big]\\ - & = \sum_{S\subseteq\mathcal{N}} P_\mathcal{N}^\lambda(S) V(S)\\ - & = \sum_{S\subseteq\mathcal{N}} P_\mathcal{N}^\lambda(S) \log\det A(S)\\ - \end{align*} - \item the \emph{concave relaxation} of $V$: - \begin{align*} - L_{\mathcal{N}}(\lambda) - & = \log\det \mathbb{E}_{S\sim P_\mathcal{N}^\lambda}\big[A(S)\big]\\ - & = \log\det\left(\sum_{S\subseteq N} - P_\mathcal{N}^\lambda(S)A(S)\right)\\ - & = \log\det\left(I_d + \mu\sum_{i\in\mathcal{N}} - \lambda_ix_i\T{x_i}\right)\\ - & \defeq \log\det \tilde{A}(\lambda) - \end{align*} -\end{itemize} +For readability, let us define $A(S) = I_d + \T{X_S}X_S$, so that the value +function is $V(S) = \log\det A(S)$. + +We have already defined the function $L_\mathcal{N}$ which is a relaxation of +the value function. 
Note that it is possible to express it in terms of the +probabilities $P_\mathcal{N}^\lambda$: +\begin{align*} + L_{\mathcal{N}}(\lambda) + & = \log\det \mathbb{E}_{S\sim P_\mathcal{N}^\lambda}\big[A(S)\big]\\ + & = \log\det\left(\sum_{S\subseteq N} + P_\mathcal{N}^\lambda(S)A(S)\right)\\ + & = \log\det\left(I_d + \mu\sum_{i\in\mathcal{N}} + \lambda_ix_i\T{x_i}\right)\\ + & \defeq \log\det \tilde{A}(\lambda) +\end{align*} + +We will also use the \emph{multi-linear extension}: +\begin{align*} + F_\mathcal{N}(\lambda) + & = \mathbb{E}_{S\sim P_\mathcal{N}^\lambda}\big[\log\det A(S)\big]\\ + & = \sum_{S\subseteq\mathcal{N}} P_\mathcal{N}^\lambda(S) V(S)\\ + & = \sum_{S\subseteq\mathcal{N}} P_\mathcal{N}^\lambda(S) \log\det A(S) +\end{align*} \begin{lemma}\label{lemma:concave} The \emph{concave relaxation} $L_\mathcal{N}$ is concave\footnote{Hence @@ -358,12 +360,13 @@ We will consider two relaxations of the value function $V$ over $\mathcal{N}$: \end{multline*} \end{proof} -It has been observed already that the multilinear extension presents the -cross-convexity property: it is convex along any direction $e_i-e_j$ where $e_i$ -and $e_j$ are two elements of the canonical basis. This property allows to -trade between two fractional components of a point without diminishing the -value of the relaxation. The following lemma follows from the same idea but -also ensures that the points remain feasible during the trade. +It has been observed already (see for example \cite{dughmi}) that the +multilinear extension presents the cross-convexity property: it is convex along +any direction $e_i-e_j$ where $e_i$ and $e_j$ are two elements of the standard +basis. This property allows one to trade between two fractional components of +a point without diminishing the value of the relaxation. The following lemma +is inspired by a similar lemma from \cite{dughmi} but also ensures that +the points remain feasible during the trade.
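The feasibility-preserving trade can be sketched numerically. The sketch below (hypothetical names; a simplification, not the paper's exact procedure) moves along $e_i - e_j$ with step sizes in the ratio $c_j : c_i$ so that the budget spent, $\sum_k c_k\lambda_k$, stays constant, and stops when a coordinate hits $\{0,1\}$; each step makes at least one coordinate integral. In the actual argument, cross-convexity of the multilinear extension additionally dictates which of the two coordinates to raise so the value does not decrease; that choice is omitted here.

```python
import numpy as np

def pipage_step(lam, c, i, j):
    """One cost-preserving move along e_i - e_j: raise lam[i] by t and lower
    lam[j] by t * c[i] / c[j], with the largest t keeping both in [0, 1].
    After the step at least one of the two coordinates is integral."""
    lam = lam.copy()
    r = c[i] / c[j]
    t = min(1.0 - lam[i], lam[j] / r)   # largest feasible step size
    lam[i] += t
    lam[j] -= t * r
    return lam

def round_fractional(lam, c):
    """Repeat cost-preserving steps until at most one coordinate is fractional."""
    lam = np.asarray(lam, dtype=float)
    while True:
        frac = [k for k in range(len(lam)) if 1e-9 < lam[k] < 1 - 1e-9]
        if len(frac) < 2:
            return lam
        lam = pipage_step(lam, c, frac[0], frac[1])

c = np.array([1.0, 2.0, 1.0, 3.0])            # illustrative costs
lam = np.array([0.5, 0.25, 0.9, 0.1])          # a fractional feasible point
rounded = round_fractional(lam, c)
# the budget spent is unchanged, so a feasible point stays feasible
assert abs(c @ rounded - c @ lam) < 1e-9
```

Since each step zeroes out or saturates one coordinate while keeping $c\cdot\lambda$ fixed, the loop terminates after at most $|\mathcal{N}| - 1$ steps with at most one fractional component, mirroring the statement of the rounding lemma.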
\begin{lemma}[Rounding]\label{lemma:rounding} For any feasible $\lambda\in[0,1]^{|\mathcal{N}|}$, there exists a feasible @@ -563,7 +566,13 @@ also ensures that the points remain feasible during the trade. \end{align*} \end{proof} -We can now prove lemma~\ref{lemma:relaxation} from previous section. +\begin{lemma}\label{lemma:relaxation} + We have: + \begin{displaymath} + OPT(L_\mathcal{N}, B) \leq \frac{1}{C_\mu}\big(2 OPT(V,\mathcal{N},B) + + \max_{i\in\mathcal{N}}V(i)\big) + \end{displaymath} +\end{lemma} \begin{proof} Let us consider a feasible point $\lambda^*\in[0,1]^{|\mathcal{N}|}$ such that $L_\mathcal{N}(\lambda^*) |
