In this section we present a mechanism for \EDP. Previous works on maximizing
submodular functions \cite{nemhauser, sviridenko-submodular} and designing
auction mechanisms for submodular value functions \cite{singer-mechanisms,
chen, singer-influence} rely on a greedy algorithm. In this algorithm, elements
are added to the solution set according to the following greedy selection rule.
Assume that $S\subseteq \mathcal{N}$ is the solution set constructed thus far;
then, the next element $i$ to be included in $S$ is the one with the highest
\emph{marginal-value-per-cost}:
\begin{align}
    i = \argmax_{j\in\mathcal{N}\setminus S}\frac{V(S\cup\{j\}) - V(S)}{c_j}\label{greedy}
\end{align}
This process terminates when no more items can be added to $S$ using
\eqref{greedy} under budget $B$. This is the generalization of the
\emph{value-per-cost} ratio used in the greedy approximation algorithm for
\textsc{Knapsack}. However, in contrast to \textsc{Knapsack}, for general
submodular functions, the marginal value of an element in \eqref{greedy}
depends on the set to which the element is added.

Unfortunately, even in the full information case, the greedy heuristic has an
unbounded approximation ratio. Let $i^* = \argmax_{i\in\mathcal{N}} V(i)$
denote the element of highest value (viewed as a singleton set).
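The selection rule \eqref{greedy} can be sketched in a few lines. The following is a minimal Python sketch against a toy coverage-style value oracle; the function names, the instance, and the tie-breaking are our own illustrative choices, not the paper's:

```python
def greedy(N, V, c, B):
    """Budgeted greedy: repeatedly add the element maximizing the
    marginal-value-per-cost ratio among those still fitting the budget."""
    S, spent = set(), 0.0
    while True:
        best, best_ratio = None, 0.0
        for j in N - S:
            if spent + c[j] > B:
                continue  # j no longer fits under budget B
            ratio = (V(S | {j}) - V(S)) / c[j]  # marginal value per cost
            if ratio > best_ratio:
                best, best_ratio = j, ratio
        if best is None:
            return S  # no element can be added: terminate
        S.add(best)
        spent += c[best]

# Toy coverage-style instance: V(S) is the number of points covered by S.
cover = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"d"}}
V = lambda S: len(set().union(*(cover[i] for i in S))) if S else 0
S_G = greedy({1, 2, 3}, V, {1: 1.0, 2: 1.0, 3: 1.0}, B=2.0)
```

On this instance the sketch picks two of the unit-cost elements covering three points in total; an adversarial instance (one huge-value, huge-cost element versus many tiny ones) is what makes the plain rule unboundedly bad.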
It has been noted by Khuller \emph{et al.}~\cite{khuller} that for the max
coverage problem, taking the maximum between the greedy solution and $V(i^*)$
gives a $\frac{2e}{e-1}$ approximation ratio. In the general case, we have the
following result from Singer \cite{singer-influence}, which follows from
Chen \emph{et al.}~\cite{chen}: \stratis{Is it Singer or Chen? Also, we need to introduce $V(i)$ somewhere...}
\begin{lemma}[Singer \cite{singer-influence}]\label{lemma:greedy-bound}
    Let $S_G$ be the set computed by the greedy heuristic and define
    $i^* = \argmax_{j\in\mathcal{N}} V(j)$. Then:
    \begin{displaymath}
        OPT(V,\mathcal{N},B) \leq \frac{e}{e-1}\big( 3 V(S_G) + 2 V(i^*)\big)
    \end{displaymath}
\end{lemma}
Hence, taking the maximum between $V(S_G)$ and $V(i^*)$ yields an
approximation ratio of $\frac{5e}{e-1}$. However, Singer
\cite{singer-influence} notes that this approach breaks incentive
compatibility and therefore cannot be directly applied to the strategic case.
Indeed, suppose this allocation rule is used, and consider a case where it
allocates the greedy set ($V(S_G) \geq V(i^*)$). If an agent $i$ from $S_G$
reduces her cost, it could happen that $V(S_G)$ becomes smaller than $V(i^*)$.
In this case the mechanism's allocation becomes $\{i^*\}$, and $i$ is excluded
from the allocated set. This violates the monotonicity of the allocation
function, and hence also truthfulness, by Myerson's theorem.
To address this, Chen \emph{et al.}~\cite{chen} introduce a third quantity: if
$V(i^*)$ is larger than this quantity, the mechanism allocates to $i^*$ alone.
Specifically, Chen \emph{et al.}~\cite{chen} propose comparing $V(i^*)$ to the
optimal value of a \emph{fractional relaxation} of the function $V$ over the
set $\mathcal{N}$. A function $R_\mathcal{N}:[0, 1]^{n}\to\reals_+$ defined on
the hypercube $[0, 1]^{n}$ is a fractional relaxation of $V$ over the set
$\mathcal{N}$ if $R_\mathcal{N}(\mathbf{1}_S) = V(S)$ for all
$S\subseteq\mathcal{N}$, where $\mathbf{1}_S$ denotes the indicator vector of
$S$. The optimization program \eqref{eq:non-strategic} extends naturally to
such relaxations:
\begin{align}
    OPT(R_\mathcal{N}, B) = \max_{\lambda\in[0, 1]^{n}}
    \left\{R_\mathcal{N}(\lambda) \mid \sum_{i=1}^{n} \lambda_i c_i
    \leq B\right\}\label{relax}
\end{align}
To obtain a truthful constant-approximation mechanism for \textsc{Knapsack},
Chen \emph{et al.}~\cite{chen} substitute $OPT(R_\mathcal{N},B)$ for
$OPT(V,\mathcal{N},B)$ in the above comparison, where $R_{\mathcal{N}}$ is a
relaxation of the \textsc{Knapsack} objective.
A relaxation which is commonly used, due to its well-behaved properties in the
context of maximizing submodular functions, is the \emph{multi-linear}
extension:
\begin{equation}\label{eq:multilinear}
    F_\mathcal{N}(\lambda)
    = \mathbb{E}_{S\sim P_\mathcal{N}^\lambda}\left[V(S)\right]
    = \sum_{S\subseteq\mathcal{N}} P_\mathcal{N}^\lambda(S) V(S)
\end{equation}
$P_\mathcal{N}^\lambda(S)$ is the probability of choosing the set $S$ if
we select each element $i$ in $\mathcal{N}$ independently with probability
$\lambda_i$:
\begin{displaymath}
    P_\mathcal{N}^\lambda(S) = \prod_{i\in S} \lambda_i
    \prod_{i\in\mathcal{N}\setminus S} (1 - \lambda_i)
\end{displaymath}
For \textsc{Knapsack}, Chen \emph{et al.}~\cite{chen} directly use the
multi-linear extension, as in this case \eqref{relax} can be solved in
polynomial time. For \textsc{Coverage}, however, the optimal value of the
multi-linear extension cannot be computed in polynomial time. Thus, Singer
\cite{singer-influence} introduces a second relaxation of the value function,
which is proved to be close to the multi-linear extension using the
\emph{pipage rounding} method of Ageev and Sviridenko \cite{pipage}.
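For intuition, the definition \eqref{eq:multilinear} can be evaluated directly by enumerating all subsets; this is exponential in $n$ and only meant as a sanity check (the function and variable names are ours):

```python
from itertools import combinations
from math import prod

def multilinear(N, V, lam):
    """F(lam) = sum over S of P^lam(S) * V(S), where P^lam(S) is the
    probability of drawing S when each element i of N is kept
    independently with probability lam[i]."""
    total = 0.0
    for k in range(len(N) + 1):
        for T in combinations(N, k):
            S = set(T)
            p = prod(lam[i] for i in S) * prod(1 - lam[i] for i in set(N) - S)
            total += p * V(S)
    return total

# For a modular V (here cardinality), F is linear in lambda, and on an
# indicator vector F coincides with V, as required of a relaxation.
f_frac = multilinear([1, 2], len, {1: 0.3, 2: 0.6})  # expect 0.3 + 0.6
f_ind = multilinear([1, 2], len, {1: 1.0, 2: 0.0})   # expect V({1}) = 1
```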
Here, following these ideas, we introduce a relaxation specifically tailored
to the value function of \EDP. The multi-linear extension of \eqref{modified}
can be written:
\begin{displaymath}
    F_\mathcal{N}(\lambda) = \mathbb{E}_{S\sim
    P_\mathcal{N}^\lambda}\left[\log\det A(S) \right] \;\text{with}\; A(S)
    = I_d + \sum_{i\in S} x_i\T{x_i}
\end{displaymath}
As in the case of \textsc{Coverage}, \eqref{relax} is not a convex
optimization problem and is not easy to solve directly. We thus introduce an
additional relaxation $L_{\mathcal{N}}$, which follows naturally by swapping
the expectation and the $\log\det$ in the above formula:
\begin{align}\label{eq:concave}
    L_\mathcal{N}(\lambda) & \defeq \log\det\left(\mathbb{E}_{S\sim
    P_\mathcal{N}^\lambda}\left[A(S)\right]\right)
    = \log\det\Big(I_d + \sum_{i\in\mathcal{N}} \lambda_i\, x_i\T{x_i}\Big)
\end{align}
Since $\log\det$ is concave, Newton's method can be used to find the maximum
of $L_\mathcal{N}$ to any precision $\varepsilon$ in a number of iterations
$O(\log\log\varepsilon^{-1})$. The main challenge will be to prove that
$OPT(L_\mathcal{N}, B)$ is close to $V(S_G)$. To do so, our main technical
contribution is to prove that $L_\mathcal{N}$ has a bounded approximation
ratio to the value function $V$ (Lemma~\ref{lemma:relaxation-ratio}).

The resulting mechanism for \EDP{} is presented in Algorithm~\ref{mechanism}.
The constant $C$ is an absolute constant that we determine in
Section~\ref{...} (see \eqref{...}).
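Swapping the expectation and the concave function $\log\det$ can only increase the value (Jensen's inequality), so $F_\mathcal{N} \leq L_\mathcal{N}$ pointwise; Lemma~\ref{lemma:relaxation-ratio} further bounds the gap by a factor $2$. Below is a brute-force numerical check on a toy $d=2$ instance; the vectors and the point $\lambda$ are arbitrary choices of ours:

```python
from itertools import combinations
from math import log, prod

xs = {1: (1.0, 0.0), 2: (0.0, 1.0), 3: (0.6, 0.8)}  # toy experiments, d = 2

def matrix(weights):
    """Entries (a11, a12, a22) of I_2 + sum_i w_i x_i x_i^T."""
    a11 = 1.0 + sum(w * x[0] * x[0] for x, w in weights)
    a12 = sum(w * x[0] * x[1] for x, w in weights)
    a22 = 1.0 + sum(w * x[1] * x[1] for x, w in weights)
    return a11, a12, a22

def logdet(a11, a12, a22):
    return log(a11 * a22 - a12 * a12)  # 2x2 determinant directly

def V(S):  # value of a set of experiments
    return logdet(*matrix([(xs[i], 1.0) for i in S]))

def L(lam):  # concave relaxation: log det of the expected matrix
    return logdet(*matrix([(xs[i], lam[i]) for i in xs]))

def F(lam):  # multi-linear extension, by enumeration of all subsets
    return sum(
        prod(lam[i] for i in S) * prod(1 - lam[i] for i in xs.keys() - S) * V(S)
        for k in range(len(xs) + 1)
        for S in map(set, combinations(xs, k))
    )

lam = {1: 0.5, 2: 0.3, 3: 0.8}
```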
\begin{algorithm}
    \caption{Mechanism for \EDP{}}\label{mechanism}
    \begin{algorithmic}[1]
        \State $i^* \gets \argmax_{j\in\mathcal{N}}V(j)$
        \State $x^* \gets \argmax_{x\in[0,1]^{n}} \{L_{\mathcal{N}\setminus\{i^*\}}(x)
        \mid c(x)\leq B\}$, with precision $\varepsilon>0$
        \Statex
        \If{$L_{\mathcal{N}\setminus\{i^*\}}(x^*) < C \cdot V(i^*)$} \label{c}
            \State \textbf{return} $\{i^*\}$
        \Else
            \State $i \gets \argmax_{1\leq j\leq n}\frac{V(j)}{c_j}$
            \State $S_G \gets \emptyset$
            \State greedily add elements to $S_G$ according to \eqref{greedy}
            while the budget $B$ permits
            \State \textbf{return} $S_G$
        \EndIf
    \end{algorithmic}
\end{algorithm}

We can now state our main result:
\begin{theorem}\label{thm:main}
    There exists an absolute constant $C$ such that the mechanism described in
    Algorithm~\ref{mechanism} is truthful, individually rational and budget
    feasible. Furthermore, for any $\varepsilon>0$, the mechanism has
    complexity $O\left(\text{poly}(n, d, \log\log
    \varepsilon^{-1})\right)$ and returns a set $S^*$ such that:
    \begin{align*}
        OPT(V,\mathcal{N}, B) \leq \ldots
    \end{align*}
\end{theorem}
Note that this implies we construct a poly-time mechanism with accuracy
arbitrarily close to 19.68, by taking $\varepsilon = \ldots$\stratis{fix me}.
In addition, we prove the following lower bound.
\begin{theorem}\label{thm:lowerbound}
    There is no $2$-approximate, truthful, budget feasible, individually
    rational mechanism for \EDP.
\end{theorem}
\begin{proof}
    Suppose, for contradiction, that such a mechanism exists.
    Consider two experiments with dimension $d=2$, such that $x_1 = e_1 =
    [1, 0]$, $x_2 = e_2 = [0, 1]$ and $c_1 = c_2 = B/2+\varepsilon$. Then, one
    of the two experiments, say, $x_1$, must be in the set selected by the
    mechanism, otherwise the ratio is unbounded, a contradiction. If agent $1$
    lowers her cost to $B/2-\varepsilon$, by monotonicity she remains in the
    solution; by threshold payment, she is paid at least $B/2+\varepsilon$. By
    budget feasibility and individual rationality, $x_2$ is then not included
    in the solution: hence, the selected set attains a value $\log 2$, while
    the optimal value is $2\log 2$.
\end{proof}

\subsection{Proof of Theorem~\ref{thm:main}}
\stratis{individual rationality???}

We now present the proof of Theorem~\ref{thm:main}. Monotonicity and budget
feasibility follow from the analysis of Chen \emph{et al.}~\cite{chen}; we
briefly restate the proofs for the sake of completeness. The proof of the
approximation ratio is done in Lemma~\ref{lemma:approx} and uses a bound on
our concave relaxation $L_\mathcal{N}$ (Lemma~\ref{lemma:relaxation-ratio}),
which is our main technical contribution.

\begin{lemma}
    The mechanism is monotone.
\end{lemma}
\begin{proof}
    Consider an agent $i$ with cost $c_i$ that is selected by the mechanism,
    and suppose that she reports a cost $c_i'\leq c_i$, all the other costs
    staying the same.
    If $i\neq i^*$, then since $s_i(c_i,c_{-i})=1$, we have
    $L_{\mathcal{N}\setminus\{i^*\}}(x^*) \geq C\, V(i^*)$ and $i\in S_G$. By
    reporting a cost $c_i'\leq c_i$, $i$ may be selected at an earlier
    iteration of the greedy heuristic. Let us denote by $S_i$
    (resp. $S_i'$) the set to which $i$ is added when reporting cost $c_i$
    (resp. $c_i'$). We have $S_i'\subseteq S_i$. Moreover, by submodularity:
    \begin{align*}
        \frac{V(S_i'\cup\{i\}) - V(S_i')}{c_i'}
        \geq \frac{V(S_i\cup\{i\}) - V(S_i)}{c_i}
    \end{align*}
    Hence $i$ still satisfies the greedy selection rule and remains in the
    allocated set.
\end{proof}

\begin{lemma}\label{lemma:complexity}
    For any $\varepsilon > 0$, the complexity of Algorithm~\ref{mechanism} is
    $O(\text{poly}(n, d, \log\log \varepsilon^{-1}))$.
\end{lemma}
\begin{proof}
    The value function $V$ \eqref{modified} can be computed in time
    $O(\text{poly}(n, d))$ and the mechanism only involves a linear number of
    queries to the function $V$. The function $\log\det$ is concave and
    self-concordant (see \cite{boyd2004convex}), so for any $\varepsilon$, its
    maximum can be found to a precision $\varepsilon$ in a number of
    iterations of Newton's method $O(\log\log\varepsilon^{-1})$.
    Each iteration of Newton's method can be done in time
    $O(\text{poly}(n, d))$. Thus, line 2 of Algorithm~\ref{mechanism} can be
    computed in time $O(\text{poly}(n, d, \log\log \varepsilon^{-1}))$.
\end{proof}

Finally, we prove the approximation ratio of the mechanism. We use the
following property of the multi-linear extension, referred to in the
literature as \emph{cross-convexity} (see \emph{e.g.}~\cite{pipage}): if for
$i\neq j$ we define
\begin{displaymath}
    \tilde{F} : t \mapsto F_\mathcal{N}\big(\lambda + t(e_i - e_j)\big)
\end{displaymath}
where $e_i$ and $e_j$ are two vectors of the standard basis of $\reals^{n}$,
then $\tilde{F}$ is convex. Hence its maximum over the interval
\begin{displaymath}
    I_\lambda = \Big[\max(-\lambda_i,\lambda_j-1), \min(1-\lambda_i,
    \lambda_j)\Big]
\end{displaymath}
is attained at one of its endpoints. The lemma below proves that we can
similarly trade one fractional component for another until one of them becomes
integral, \emph{while maintaining the feasibility of the point at which
$F_\mathcal{N}$ is evaluated}. Here, by feasibility of a point $\lambda$, we
mean that it satisfies the budget constraint
$\sum_{1\leq i\leq n} \lambda_i c_i \leq B$.

\begin{lemma}[Rounding]\label{lemma:rounding}
    For any feasible $\lambda\in[0,1]^{n}$, there exists a feasible
    $\bar{\lambda}\in[0,1]^{n}$ such that at most one of its components is
    fractional, that is, lies in $(0,1)$, and:
    \begin{displaymath}
        F_{\mathcal{N}}(\lambda)\leq F_{\mathcal{N}}(\bar{\lambda})
    \end{displaymath}
\end{lemma}
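One way to picture such a rounding is the sketch below: pick two fractional coordinates and move along a direction that leaves the budget $\sum_i \lambda_i c_i$ unchanged until one coordinate hits $0$ or $1$, keeping whichever endpoint has the larger value. This is an illustration under our own choice of direction and value oracle, not the paper's exact procedure:

```python
def pipage_round(lam, c, F, tol=1e-9):
    """Round lam until at most one coordinate is fractional.  Moving
    lam[i] by +t/c[i] and lam[j] by -t/c[j] leaves sum_i lam[i]*c[i]
    unchanged, so budget feasibility is preserved; at each step we keep
    the endpoint with the larger F-value."""
    lam = dict(lam)
    while True:
        frac = [i for i in lam if tol < lam[i] < 1 - tol]
        if len(frac) <= 1:
            return lam
        i, j = frac[0], frac[1]
        # Largest step in each direction before i or j hits a bound.
        t_fwd = min((1 - lam[i]) * c[i], lam[j] * c[j])
        t_bwd = min(lam[i] * c[i], (1 - lam[j]) * c[j])
        candidates = []
        for t in (t_fwd, -t_bwd):
            mu = dict(lam)
            mu[i] += t / c[i]
            mu[j] -= t / c[j]
            candidates.append(mu)
        lam = max(candidates, key=F)

# Toy run with a modular value (sum of coordinates) and costs of our choosing.
c = {1: 1.0, 2: 1.0, 3: 2.0}
rounded = pipage_round({1: 0.5, 2: 0.5, 3: 0.5}, c, lambda mu: sum(mu.values()))
```

Each step makes at least one of the two chosen coordinates integral, so the loop terminates after at most $n-1$ steps.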
\begin{lemma}\label{lemma:relaxation-ratio}
    The following inequality holds:
    \begin{displaymath}
        \forall\lambda\in[0,1]^{n},\;
        \frac{1}{2}\, L_\mathcal{N}(\lambda)
        \leq F_\mathcal{N}(\lambda)
        \leq L_{\mathcal{N}}(\lambda)
    \end{displaymath}
\end{lemma}
\begin{proof}
    Finally, if the minimum is attained on a face of the hypercube (a face is
    defined as a subset of the hypercube where one of the variables is fixed
    to 0 or 1), we can assume without loss of generality that the minimum is
    attained on the face where the $n$-th variable has been fixed to 0 or 1.
    Then, either the minimum is attained at a point interior to the face or on
    a boundary of the face. In the first case, relation \eqref{eq:lhopital}
    still characterizes the minimum for $i < n$. In the second case, by
    repeating the argument by induction, we see that all that is left to do is
    to show that the bound holds at the vertices of the cube (the faces of
    dimension $0$). The vertices are exactly the binary vectors
    $\mathbf{1}_S$ for $S\subseteq\mathcal{N}$.
\end{proof}

\begin{proof}
    Let us consider a feasible point $\lambda^*\in[0,1]^{n}$ such that
    $L_\mathcal{N}(\lambda^*) = OPT(L_\mathcal{N}, B)$. By applying
    Lemma~\ref{lemma:relaxation-ratio} and Lemma~\ref{lemma:rounding} we get
    a feasible point $\bar{\lambda}$ with at most one fractional component
    such that:
