| author | Thibaut Horel <thibaut.horel@gmail.com> | 2013-02-10 16:22:34 -0800 |
|---|---|---|
| committer | Thibaut Horel <thibaut.horel@gmail.com> | 2013-02-10 16:22:34 -0800 |
| commit | 8b6da319e9ffa6ed92b0b34a0ef1771f16b18092 (patch) | |
| tree | 38081bc5c86ac9ab5dc5247a7d3fd9cf3fc966a0 | |
| parent | b97de83531a0d2034b7b62cca7403b7955d038c7 (diff) | |
| download | recommendation-8b6da319e9ffa6ed92b0b34a0ef1771f16b18092.tar.gz | |
Rewriting of main section up to the beginning of the proof
| -rw-r--r-- | main.tex | 351 |
1 files changed, 152 insertions, 199 deletions
\label{sec:main}
In this section we present our mechanism for \EDP. Prior approaches to budget
feasible mechanisms build upon the full information case, which we discuss
first.

\paragraph{Full information submodular maximization}

The best known approximation ratio $\frac{e}{e-1}$ for maximizing a submodular
set function under a cardinality constraint is attained by a greedy
algorithm~\cite{nemhauser}. In this algorithm, elements are added to the
solution set according to the following greedy selection rule. Assume that
$S\subseteq \mathcal{N}$ is the solution set constructed thus far; then, the
next element $i$ to be included in $S$ is the one with the highest
\emph{marginal value}: $V(S\cup\{i\}) - V(S)$. This process terminates when
the cardinality of $S$ reaches the cardinality limit.

For submodular maximization under a budget
constraint~\eqref{eq:non-strategic}, the greedy algorithm can be generalized:
if $S\subset\mathcal{N}$ is the set constructed thus far, the element $i$ to
be included is the one which maximizes the \emph{marginal-value-per-cost}:
\begin{align}
    i = \argmax_{j\in\mathcal{N}\setminus S}\frac{V(S\cup\{j\}) - V(S)}{c_j}\label{greedy}
\end{align}
This is repeated until the sum of the costs of the elements in $S$ reaches the
budget constraint $B$.
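As an illustration, here is a minimal Python sketch of this budgeted greedy
rule; it assumes a value oracle \texttt{V} for the set function and a cost
dictionary \texttt{c} (the names, and the choice to skip elements that no
longer fit within the budget, are ours):
\begin{verbatim}
def greedy(N, V, c, B):
    """Budgeted greedy: repeatedly add the affordable element
    with the highest marginal-value-per-cost."""
    S, spent = set(), 0.0
    candidates = set(N)
    while candidates:
        # restrict to elements that still fit in the budget
        affordable = [j for j in candidates if spent + c[j] <= B]
        if not affordable:
            break
        i = max(affordable, key=lambda j: (V(S | {j}) - V(S)) / c[j])
        S.add(i)
        spent += c[i]
        candidates.remove(i)
    return S
\end{verbatim}
For instance, when $V(S) = |S|$ every marginal value equals one and the rule
degenerates to picking the cheapest elements first.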
Unfortunately, the greedy algorithm alone has an unbounded approximation
ratio~\cite{khuller}. However, let $i^* = \argmax_{i\in\mathcal{N}} V(\{i\})$
be the element of maximum value and $S_G$ the set computed by the greedy
algorithm; taking the maximum between $V(S_G)$ and $V(i^*)$ yields a bounded
approximation ratio. Formally, the following lemma holds.
\begin{lemma}[\cite{chen}]\label{lemma:greedy-bound}
Let $S_G$ be the set computed by the greedy algorithm and let
$i^* = \argmax_{i\in\mathcal{N}} V(\{i\})$. We have:
\begin{displaymath}
    OPT \leq \frac{e}{e-1}\big( 3 V(S_G) + 2 V(i^*)\big).
\end{displaymath}
\end{lemma}
This lemma immediately proves an approximation ratio of $\frac{5e}{e-1}$ for
the aforementioned approach. A more sophisticated
algorithm~\cite{sviridenko-submodular} attains the optimal $\frac{e}{e-1}$
approximation ratio.
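To see where the $\frac{5e}{e-1}$ ratio comes from, note that
$3V(S_G) + 2V(i^*) \leq 5\max\big(V(S_G), V(i^*)\big)$, so
Lemma~\ref{lemma:greedy-bound} gives
\begin{displaymath}
    OPT \leq \frac{e}{e-1}\big(3V(S_G) + 2V(i^*)\big)
        \leq \frac{5e}{e-1}\,\max\big(V(S_G), V(i^*)\big).
\end{displaymath}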
\paragraph{Strategic case}
We could design the allocation function of our mechanism by following the full
information approach: allocating to agent $i^*$ when $V(i^*)\geq V(S_G)$ and
to $S_G$ otherwise. However, Singer~\cite{singer-influence} notes that this
allocation function is not monotone, and thus breaks incentive compatibility
by Myerson's theorem (Theorem~\ref{thm:myerson}).

Let us denote by $OPT_{-i^*}$ the optimal value achievable in the
full-information case after removing $i^*$ from the set $\mathcal{N}$:
\begin{equation}
    OPT_{-i^*} \defeq \max_{S\subseteq\mathcal{N}\setminus\{i^*\}}
    \Big\{V(S) \;\Big|\; \sum_{i\in S}c_i\leq B\Big\}
\end{equation}
Singer~\cite{singer-mechanisms} and Chen \emph{et al.}~\cite{chen} prove that
allocating to $i^*$ when $V(i^*)\geq C\cdot OPT_{-i^*}$ (for some constant
$C$) and to $S_G$ otherwise yields an 8.34-approximation mechanism. However,
$OPT_{-i^*}$, the solution of the full-information case, cannot be computed in
polynomial time when the underlying problem is NP-hard (unless P=NP), as is
the case for \EDP{}.

Instead, Singer~\cite{singer-mechanisms} and Chen \emph{et al.}~\cite{chen}
suggest replacing $OPT_{-i^*}$ by a quantity $L^*$ satisfying the following
properties:
\begin{itemize}
    \item $L^*$ must not depend on agent $i^*$'s reported cost and must be
        monotone: it can only increase when agents decrease their costs. This
        ensures the monotonicity of the allocation function.
    \item $L^*$ must be close to $OPT_{-i^*}$ to maintain a bounded
        approximation ratio.
    \item $L^*$ must be computable in polynomial time.
\end{itemize}
Such a quantity $L^*$ can be found by considering relaxations of the
optimization problem.
A function $R:[0, 1]^{n}\to\reals_+$ defined on the hypercube $[0, 1]^{n}$ is
a \emph{fractional relaxation} of $V$ over the set $\mathcal{N}$ if
$R(\id_S) = V(S)$ for all $S\subseteq\mathcal{N}$, where $\id_S$ denotes the
indicator vector of $S$. The optimization program \eqref{eq:non-strategic}
extends naturally to such relaxations:
\begin{align}
    OPT' = \max_{\lambda\in[0, 1]^{n}}
    \left\{R(\lambda) \mid \sum_{i=1}^{n} \lambda_i c_i
    \leq B\right\}\label{relax}
\end{align}
In \cite{singer-mechanisms} and \cite{chen}, using $L^* = OPT'_{-i^*}$, where
$R$ is the \emph{multilinear extension} of $V$ (see
Section~\ref{sec:relaxation}), yields a constant-approximation mechanism for
\textsc{Knapsack}.

In \cite{singer-influence}, the optimization program \eqref{relax} cannot be
solved efficiently when $R$ is the multilinear extension of $V$. Consequently,
Singer introduces a second relaxation which he relates to the multilinear
extension through the \emph{pipage rounding} framework of Ageev and
Sviridenko~\cite{pipage}.
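Although \eqref{relax} is hard to solve when $R$ is the multilinear extension,
the extension itself,
$F(\lambda) = \mathbb{E}_{S\sim P_\mathcal{N}^\lambda}[V(S)]$, where $S$
contains each element $i$ independently with probability $\lambda_i$, is
straightforward to estimate pointwise by sampling. A minimal sketch, assuming
a value oracle \texttt{V} (the names are ours):
\begin{verbatim}
import random

def multilinear_estimate(V, lam, N, samples=2000, seed=0):
    """Monte-Carlo estimate of F(lam) = E_{S ~ P^lam}[V(S)]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        # include each element i independently w.p. lam[i]
        S = {i for i in N if rng.random() < lam[i]}
        total += V(S)
    return total / samples
\end{verbatim}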
\paragraph{Our approach}
Following Chen \emph{et al.}~\cite{chen} and Singer~\cite{singer-influence},
we in turn introduce a relaxation $L$ specifically tailored to the value
function of \EDP:
\begin{displaymath}
\forall\,\lambda\in[0,1]^n,\quad L(\lambda) \defeq
\log\det\left(I_d + \sum_{i\in\mathcal{N}} \lambda_i x_i\T{x_i}\right),
\end{displaymath}
and then use $L^* = OPT'_{-i^*}$ with $R=L$ in our mechanism. The function $L$
is well known to be concave and even self-concordant (see \emph{e.g.},
\cite{boyd2004convex}). The analysis of Newton's method for self-concordant
functions in \cite{boyd2004convex} shows that the maximum of $L$ can be found
to any precision $\varepsilon$ in $O(\log\log\varepsilon^{-1})$ iterations.
Being the solution to a maximization problem, $L^*$ satisfies the required
monotonicity property. The main challenge will be to prove that $OPT'_{-i^*}$,
for the relaxation $R=L$, is close to $OPT$. As in~\cite{singer-influence},
this will be done using the \emph{pipage rounding} framework of~\cite{pipage}
and forms the technical bulk of the paper.

The resulting mechanism for \EDP{} is composed of:
\begin{itemize}
\item the allocation function presented in Algorithm~\ref{mechanism}, and
\item the payment function, which pays each allocated agent $i$ her threshold
    payment as described in Myerson's theorem. In the case where $\{i^*\}$ is
    the allocated set, her threshold payment is $B$ (she would have been
    dropped on line 1 of Algorithm~\ref{mechanism} had she reported a higher
    cost). A closed-form formula for the threshold payment when $S_G$ is the
    allocated set can be found in~\cite{singer-mechanisms}.
\end{itemize}
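To make the relaxation concrete, the following sketch maximizes $L$ under the
budget constraint of \eqref{relax}. It uses a generic constrained solver for
brevity, rather than the Newton scheme for self-concordant functions that the
analysis relies on; \texttt{X} stacks the design vectors $x_i$ as rows, and
all names are ours:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def solve_relaxation(X, c, B):
    """Maximize L(lam) = logdet(I_d + sum_i lam_i x_i x_i^T)
    over 0 <= lam <= 1 subject to c . lam <= B (concave)."""
    n, d = X.shape

    def neg_L(lam):
        M = np.eye(d) + (X.T * lam) @ X  # I_d + sum lam_i x_i x_i^T
        return -np.linalg.slogdet(M)[1]

    res = minimize(neg_L, x0=np.zeros(n), method="SLSQP",
                   bounds=[(0.0, 1.0)] * n,
                   constraints=[{"type": "ineq",
                                 "fun": lambda lam: B - c @ lam}])
    return res.x, -res.fun
\end{verbatim}
Since $L$ is concave and the feasible set is a polytope, any local maximum
returned by the solver is global.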
\begin{algorithm}[!t]
\caption{Mechanism for \EDP{}}\label{mechanism}
...
\EndIf
\end{algorithmic}
\end{algorithm}

We can now state our main result:
\begin{theorem}\label{thm:main}
\sloppy
...
\end{theorem}

\begin{theorem}
There is no $2$-approximate, truthful, budget feasible, individually rational
mechanism for \EDP.
\end{theorem}
\begin{proof}
Suppose, for contradiction, that such a mechanism exists. Consider two
experiments with dimension $d=2$, such that $x_1 = e_1 = [1, 0]$,
$x_2 = e_2 = [0, 1]$, and $c_1 = c_2 = B/2+\epsilon$. Then one of the two
experiments, say $x_1$, must be in the set selected by the mechanism, since
otherwise the approximation ratio is unbounded, a contradiction. If agent $1$
lowers her cost to $B/2-\epsilon$, by monotonicity $x_1$ remains in the
solution; by the threshold payment rule, she is paid at least $B/2+\epsilon$.
By budget feasibility and individual rationality, $x_2$ is then not included
in the solution: hence the selected set attains value $\log 2$, while the
optimal value is $2\log 2$.
\end{proof}
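The two values in the proof follow directly from the definition of $V$ for
\EDP:
\begin{displaymath}
    V(\{x_1\}) = \log\det\begin{pmatrix} 2 & 0\\ 0 & 1 \end{pmatrix}
    = \log 2,
    \qquad
    V(\{x_1, x_2\}) = \log\det\begin{pmatrix} 2 & 0\\ 0 & 2 \end{pmatrix}
    = 2\log 2.
\end{displaymath}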
\subsection{Proof of Theorem~\ref{thm:main}}\label{sec:proofofmainthm}
