| author | Stratis Ioannidis <stratis@stratis-Latitude-E6320.(none)> | 2012-11-04 23:38:08 -0800 |
|---|---|---|
| committer | Stratis Ioannidis <stratis@stratis-Latitude-E6320.(none)> | 2012-11-04 23:38:08 -0800 |
| commit | cec9b298f548ce44fe5eaecf71c04d65c95de993 (patch) | |
| tree | 13f4281ec9d84ef688762b118902f8c530a5bd4c | |
| parent | c3bbc02ac28d7019588d0c97e39b9e249ee5442a (diff) | |
| download | recommendation-cec9b298f548ce44fe5eaecf71c04d65c95de993.tar.gz | |
| -rw-r--r-- | main.tex | 228 |
| -rw-r--r-- | problem.tex | 4 |
2 files changed, 117 insertions, 115 deletions
@@ -31,7 +31,7 @@ Let $S_G$ be the set computed by the greedy algorithm and define $i^*$: \end{displaymath} then the following inequality holds: \begin{displaymath} -OPT(V,\mathcal{N},B) \leq \frac{e}{e-1}\big( 3 V(S_G) + 2 V(i^*)\big). +OPT \leq \frac{e}{e-1}\big( 3 V(S_G) + 2 V(i^*)\big). \end{displaymath} \end{lemma} @@ -56,8 +56,8 @@ overall polynomial complexity for the allocation algorithm. \sloppy When the underlying full information problem \eqref{eq:non-strategic} can be solved in -polynomial time, Chen et al. \cite{chen} prove that $OPT(V,\mathcal{N}\setminus\{i^*\},B)$ can play the role of this quantity: that is, allocating to $i^*$ when -$V(i^*) \geq C\cdot OPT(V,\mathcal{N}\setminus\{i^*\},B)$ (for some constant $C$) +polynomial time, Chen et al. \cite{chen} prove that $OPT_{-i^*}$, the optimal solution to \eqref{eq:non-strategic} when $i^*$ is excluded from the set $\mathcal{N}$, can play the role of this quantity: that is, allocating to $i^*$ when +$V(i^*) \geq C\cdot OPT_{-i^*}$ (for some constant $C$) and to $S_G$ otherwise yields an 8.34-approximation mechanism. However, this is not a poly-time mechanism when the underlying problem is NP-hard (unless P=NP), as is the case for \EDP. \fussy @@ -65,26 +65,26 @@ For NP-hard problems, Chen et al.~\cite{chen} %for \textsc{Knapsack} and Singer %\cite{singer-influence} for \textsc{Coverage} instead propose comparing -$V(i^*)$ to %$OPT(R_{\mathcal{N}\setminus\{i\}}, B)$, where $R_\mathcal{N}$ denotes +$V(i^*)$ to %$OPT(R_{\mathcal{N}\setminus\{i\}}, B)$, where $R$ denotes the optimal value of a \emph{fractional relaxation} of the function $V$ over the set -$\mathcal{N}$. A fuction $R_\mathcal{N}:[0, 1]^{n}\to\reals_+$ defined on the hypercube $[0, 1]^{n}$ is a fractional relaxation of $V$ -over the set $\mathcal{N}$ if %(a) $R_\mathcal{N}$ is a function defined on the hypercube $[0, 1]^{n}$ and (b) - $R_\mathcal{N}(\mathbf{1}_S) = V(S)$ for all $S\subseteq\mathcal{N}$, where +$\mathcal{N}$. 
A function $R:[0, 1]^{n}\to\reals_+$ defined on the hypercube $[0, 1]^{n}$ is a fractional relaxation of $V$ +over the set $\mathcal{N}$ if %(a) $R$ is a function defined on the hypercube $[0, 1]^{n}$ and (b) + $R(\mathbf{1}_S) = V(S)$ for all $S\subseteq\mathcal{N}$, where $\mathbf{1}_S$ denotes the indicator vector of $S$. The optimization program \eqref{eq:non-strategic} extends naturally to such relaxations: \begin{align} - OPT(R_\mathcal{N}, B) = \argmax_{\lambda\in[0, 1]^{n}} - \left\{R_\mathcal{N}(\lambda) \mid \sum_{i=1}^{n} \lambda_i c_i + OPT' = \max_{\lambda\in[0, 1]^{n}} + \left\{R(\lambda) \mid \sum_{i=1}^{n} \lambda_i c_i \leq B\right\}\label{relax} \end{align} -To attain truthful constant approximation mechanism for \textsc{Knapsack}, Chen \emph{et al.}~\cite{chen} substitute $OPT(R_\mathcal{N},B)$ for $OPT(V,\mathcal{N},B)$ in the above program, where $R_{\mathcal{N}}$ are relaxations of the \textsc{Knapsack} objectives. +To attain a truthful constant-approximation mechanism for \textsc{Knapsack}, Chen \emph{et al.}~\cite{chen} substitute $OPT'_{-i^*}$ for $OPT_{-i^*}$ in the above program, where $R$ is a relaxation of the \textsc{Knapsack} objective. Similarly, Singer~\cite{singer-influence} follows the same approach to obtain a mechanism for \textsc{Coverage}. A relaxation which is commonly used, due to its well-behaved properties in the context of maximizing submodular functions, is the \emph{multi-linear} extension: \begin{equation}\label{eq:multilinear} - F_\mathcal{N}(\lambda) + F(\lambda) = \mathbb{E}_{S\sim P_\mathcal{N}^\lambda}\left[V(S)\right] = \sum_{S\subseteq\mathcal{N}} P_\mathcal{N}^\lambda(S) V(S) \end{equation} @@ -106,14 +106,14 @@ extension using the \emph{pipage rounding} method of Ageev and Sviridenko Here, following these ideas, we introduce a relaxation specifically tailored to the value function of \EDP. 
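On instances small enough to enumerate all subsets, the multi-linear extension \eqref{eq:multilinear} can be evaluated exactly. The Python sketch below is illustrative only: the function names are hypothetical, $P_\mathcal{N}^\lambda$ is assumed to be the product distribution that includes each $i$ independently with probability $\lambda_i$, and $V(S)$ is taken to be $\log\det(I_d + \sum_{i\in S} x_i\T{x_i})$ with normalization constants set aside.

```python
import itertools

import numpy as np

def value(S, X):
    # Assumed value function: V(S) = log det(I_d + sum_{i in S} x_i x_i^T).
    d = X.shape[1]
    A = np.eye(d)
    for i in S:
        A += np.outer(X[i], X[i])
    return np.linalg.slogdet(A)[1]

def multilinear_F(lam, X):
    # F(lam) = sum over all S of P(S) * V(S), where P(S) is the product
    # distribution selecting each i independently with probability lam[i].
    n = len(lam)
    total = 0.0
    for r in range(n + 1):
        for S in itertools.combinations(range(n), r):
            chosen = set(S)
            p = 1.0
            for i in range(n):
                p *= lam[i] if i in chosen else 1.0 - lam[i]
            total += p * value(S, X)
    return total
```

At an integral point $\mathbf{1}_S$ the product distribution is a point mass on $S$, so the relaxation property $F(\mathbf{1}_S)=V(S)$ can be checked directly on random data.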
The multi-linear extension of \eqref{modified} can be written: \begin{displaymath} - F_\mathcal{N}(\lambda) = \mathbb{E}_{S\sim + F(\lambda) = \mathbb{E}_{S\sim P_\mathcal{N}^\lambda}\Big[\log\det \big(I_d + \sum_{i\in S} x_i\T{x_i}\big) \Big]. \end{displaymath} As in the case of \textsc{Coverage}, \eqref{relax} is not a convex optimization problem, and is not easy to solve directly. %As in ~\cite{singer-influence}, -We thus introduce an additional relaxation $L_{\mathcal{N}}$. Our relaxation follows naturally by swapping the expectation and the $\log\det$ +We thus introduce an additional relaxation $L$. Our relaxation follows naturally by swapping the expectation and the $\log\det$ in the above formula: \begin{align}\label{eq:concave} - L_\mathcal{N}(\lambda) & \defeq \log\det\Big(\mathbb{E}_{S\sim + L(\lambda) & \defeq \log\det\Big(\mathbb{E}_{S\sim P_\mathcal{N}^\lambda}\big[I_d + \sum_{i\in S} x_i\T{x_i}\big]\Big)\notag \\ &= \log\det\left(I_d + \sum_{i\in\mathcal{N}} \lambda_i x_i\T{x_i}\right) @@ -121,10 +121,10 @@ in the above formula: This function is well-known to be concave and even self-concordant (see \emph{e.g.}, \cite{boyd2004convex}). In this case, the analysis of Newton's method for self-concordant functions in \cite{boyd2004convex} shows that it is possible -to find the maximum of $L_\mathcal{N}$ to any precision $\varepsilon$ in +to find the maximum of $L$ to any precision $\varepsilon$ in a number of iterations $O(\log\log\varepsilon^{-1})$. The main challenge will -be to prove that $OPT(L_\mathcal{N}, B)$ is close to $V(S_G)$. To do so, our -main technical contribution is to prove that $L_\mathcal{N}$ has a bounded +be to prove that $OPT'$, obtained for the relaxation $R=L$, is close to $V(S_G)$. To do so, our +main technical contribution is to prove that $L$ has a bounded approximation ratio to the value function $V$ (Lemma~\ref{lemma:relaxation}). 
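The closed form in \eqref{eq:concave} is cheap to evaluate, and its two key properties (agreement with the value function at integral points, and concavity along any chord) can be sanity-checked numerically. The sketch below uses the same assumptions as before (hypothetical names; $V(S)=\log\det(I_d+\sum_{i\in S} x_i\T{x_i})$, constants set aside):

```python
import numpy as np

def concave_L(lam, X):
    # L(lam) = log det(I_d + sum_i lam[i] * x_i x_i^T),
    # i.e. I_d + X^T diag(lam) X, the closed form of the concave relaxation.
    d = X.shape[1]
    A = np.eye(d) + (X.T * np.asarray(lam)) @ X
    return np.linalg.slogdet(A)[1]
```

Concavity is what makes \eqref{relax} with $R=L$ tractable here: any standard method for concave maximization under linear constraints applies, and the text's Newton analysis gives the stated $O(\log\log\varepsilon^{-1})$ iteration bound.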
\begin{algorithm}[!t] @@ -132,10 +132,10 @@ approximation ratio to the value function $V$ (Lemma~\ref{lemma:relaxation}). \begin{algorithmic}[1] \State $\mathcal{N} \gets \mathcal{N}\setminus\{i\in\mathcal{N} : c_i > B\}$ \State $i^* \gets \argmax_{j\in\mathcal{N}}V(j)$ - \State $\xi \gets \argmax_{\lambda\in[0,1]^{n}} \{L_{\mathcal{N}\setminus\{i^*\}}(\lambda) - \mid \sum_{i \in \mathcal{N}\setminus\{i^*\}}c_i\lambda_i\leq B\}$, with precision $\epsilon>0$ + \State $\xi \gets \argmax_{\lambda\in[0,1]^{n}} \{L(\lambda) + \mid \lambda_{i^*}=0,\sum_{i \in \mathcal{N}\setminus\{i^*\}}c_i\lambda_i\leq B\}$ \Statex - \If{$L_{\mathcal{N}\setminus\{i^*\}}(\xi) < C \cdot V(i^*)$} \label{c} + \If{$L(\xi) < C \cdot V(i^*)$} \label{c} \State \textbf{return} $\{i^*\}$ \Else \State $i \gets \argmax_{1\leq j\leq n}\frac{V(j)}{c_j}$ @@ -169,7 +169,7 @@ We can now state our main result: has complexity $O\left(\text{poly}(n, d, \log\log \varepsilon^{-1})\right)$ and returns a set $S^*$ such that: \begin{align*} - OPT(V,\mathcal{N}, B) + OPT & \leq \frac{14e-3 + \sqrt{160e^2-48e + 9}}{2(e-1)} V(S^*)+ \varepsilon\\ & \simeq 19.68V(S^*) + \varepsilon @@ -192,9 +192,9 @@ Suppose, for contradiction, that such a mechanism exists. Consider two experimen We now present the proof of Theorem~\ref{thm:main}. Truthfulness and individual rationality follow from monotonicity and threshold payments. Monotonicity and budget feasibility follow from the analysis of Chen \emph{et al.} \cite{chen}; -we briefly restate the proofs for the sake of completeness. +we briefly restate the proofs below for the sake of completeness. Our proof of the approximation ratio uses a bound on our concave relaxation -$L_\mathcal{N}$ (Lemma~\ref{lemma:relaxation}). This is our main technical +$L$ (Lemma~\ref{lemma:relaxation}). This is our main technical contribution; the proof of this lemma can be found in Section~\ref{sec:relaxation}. \begin{lemma}\label{lemma:monotone} The mechanism is monotone. 
@@ -203,7 +203,7 @@ The mechanism is monotone. Consider an agent $i$ with cost $c_i$ that is selected by the mechanism, and suppose that she reports a cost $c_i'\leq c_i$ while all other costs stay the same. - Suppose that when $i$ reports $c_i$, $L_{\mathcal{N}\setminus\{i^*\}}(\xi) \geq C V(i^*)$; then, as $s_i(c_i,c_{-i})=1$, $i\in S_G$. + Suppose that when $i$ reports $c_i$, $L(\xi) \geq C V(i^*)$; then, as $s_i(c_i,c_{-i})=1$, $i\in S_G$. By reporting a cost $c_i'\leq c_i$, $i$ may be selected at an earlier iteration of the greedy algorithm. %using the submodularity of $V$, we see that $i$ will satisfy the greedy %selection rule: @@ -220,17 +220,17 @@ The mechanism is monotone. \frac{B}{2}\frac{V(S_i\cup\{i\})-V(S_i)}{V(S_i\cup\{i\})} \leq \frac{B}{2}\frac{V(S_i'\cup\{i\})-V(S_i')}{V(S_i'\cup\{i\})} \end{align*} - by the monotonicity and submodularity of $V$. Hence $i\in S_G'$. As $L_{\mathcal{N}\setminus \{i^*\}}(\xi)$ is the optimal value of \eqref{relax} under relaxation $L_{\mathcal{N}}$, reducing the costs can only increase this value, so under $c'_i\leq c_i$ the greedy set is still allocated and $s_i(c_i',c_{-i}) =1$. - Suppose now that when $i$ reports $c_i$, $L_{\mathcal{N}\setminus \{i^*\}}(\xi) < C V(i^*)$. Then $s_i(c_i,c_{-i})=1$ iff $i = i^*$. + by the monotonicity and submodularity of $V$. Hence $i\in S_G'$. As $L(\xi)$ is the optimal value of \eqref{relax} under relaxation $L$ when $i^*$ is excluded from $\mathcal{N}$, reducing the costs can only increase this value, so under $c'_i\leq c_i$ the greedy set is still allocated and $s_i(c_i',c_{-i}) =1$. + Suppose now that when $i$ reports $c_i$, $L(\xi) < C V(i^*)$. Then $s_i(c_i,c_{-i})=1$ iff $i = i^*$. Reporting $c_{i^*}'\leq c_{i^*}$ does not change $V(i^*)$ nor - $L_{\mathcal{N}\setminus \{i^*\}}(xi) \leq C V(i^*)$; thus $s_{i^*}(c_{i^*}',c_{-i^*})=1$. + $L(\xi)$, so the condition $L(\xi) < C \cdot V(i^*)$ still holds; thus $s_{i^*}(c_{i^*}',c_{-i^*})=1$. 
\end{proof} \begin{lemma}\label{lemma:budget-feasibility} The mechanism is budget feasible. \end{lemma} \begin{proof} -Suppose that $L_{\mathcal{N}\setminus\{i^*\}}(\xi) < C V(i^*)$. Then the mechanism selects $i^*$; as the bid of $i^*$ does not affect the above condition, the threshold payment of $i^*$ is $B$ and the mechanism is budget feasible. -Suppose thus that $L_{\mathcal{N}\setminus\{i^*\}}(\xi) \geq C V(i^*)$. +Suppose that $L(\xi) < C V(i^*)$. Then the mechanism selects $i^*$; as the bid of $i^*$ does not affect the above condition, the threshold payment of $i^*$ is $B$ and the mechanism is budget feasible. +Suppose thus that $L(\xi) \geq C V(i^*)$. Denote by $S_G$ the set selected by the greedy algorithm, and for $i\in S_G$, denote by $S_i$ the subset of the solution set that was selected by the greedy algorithm just prior to the addition of $i$---both sets determined for the present cost vector $c$. Chen \emph{et al.}~\cite{chen} show that, for any submodular function $V$, and for all $i\in S_G$: %the reported cost of an agent selected by the greedy heuristic, and holds for @@ -274,49 +274,49 @@ Hence, the total payment is bounded by the telescopic sum: \end{proof} Finally, we prove the approximation ratio of the mechanism. We use the -following lemma, which establishes the optimal fractional relaxation -$L_\mathcal{N}$ is not too far from $OPT(V,\mathcal{N},B)$. +following lemma, which establishes that $OPT'$, the optimal value of \eqref{relax} under the fractional relaxation $L$ and the budget constraint, + is not too far from $OPT$. \begin{lemma}\label{lemma:relaxation} %\begin{displaymath} - $ OPT(L_\mathcal{N}, B) \leq 4 OPT(V,\mathcal{N},B) + $ OPT' \leq 4 OPT + 2\max_{i\in\mathcal{N}}V(i)$ %\end{displaymath} \end{lemma} -Note that the lower bound $OPT(L_\mathcal{N}, B) \geq OPT(V,\mathcal{N},B) -$ also holds trivially, as $L_{\mathcal{N}}$ is a fractional relaxation of $V$ over $\mathcal{N}$. 
+Note that the lower bound $OPT' \geq OPT +$ also holds trivially, as $L$ is a fractional relaxation of $V$ over $\mathcal{N}$. The proof of Lemma~\ref{lemma:relaxation} is our main technical contribution, and can be found in Section \ref{sec:relaxation}. Using this lemma, we can complete the proof of Theorem~\ref{thm:main} by showing that, %\begin{lemma}\label{lemma:approx} %C_{\textrm{max}} = \max\left(1+C,\frac{3e}{e-1}\left( 1 + \frac{8e}{C %(e-1) -10e +2}\right)\right) - for any $\varepsilon > 0$, if $L(x^*)$ has been computed to a precision + for any $\varepsilon > 0$, if $OPT_{-i^*}'$, the optimal value of $L$ when $i^*$ is excluded from $\mathcal{N}$, has been computed to a precision $\varepsilon$, then the set $S^*$ allocated by the mechanism is such that: \begin{align} - OPT(V,\mathcal{N}, B) + OPT \leq \frac{14e\!-\!3 + \sqrt{160e^2\!-\!48e\!+\!9}}{2(e\!-\!1)} V(S^*)\!+\! \varepsilon \label{approxbound} \end{align} %\end{lemma} - To see this, let $x^*\in [0,1]^{n-1}$ be the true maximizer of $L_{\mathcal{N}\setminus\{i^*\}}$ subject to the budget constraints. Assume that on line 3 of algorithm~\ref{mechanism}, a quantity - $\tilde{L}$ such that $\tilde{L}-\varepsilon\leq L_{\mathcal{N}\setminus {i^*}}(x^*) \leq - \tilde{L}+\varepsilon$ has been computed (lemma~\ref{lemma:complexity} + To see this, let $OPT_{-i^*}'$ be the true maximum value of $L$ subject to $\lambda_{i^*}=0$, $\sum_{i\in \mathcal{N}\setminus\{i^*\}}c_i\lambda_i\leq B$. Assume that on line 3 of algorithm~\ref{mechanism}, a quantity + $\tilde{L}$ such that $\tilde{L}-\varepsilon\leq OPT_{-i^*}' \leq + \tilde{L}+\varepsilon$ has been computed (Lemma~\ref{lemma:complexity} states that this is computed in time within our complexity guarantee). 
If the condition on line 3 of the algorithm holds, then: \begin{displaymath} - V(i^*) \geq \frac{1}{C}L_{N\setminus \{i\}}(x^*)-\frac{\varepsilon}{C} \geq - \frac{1}{C}OPT(V,\mathcal{N}\setminus\{i^*\}, B) -\frac{\varepsilon}{C} + V(i^*) \geq \frac{1}{C}OPT_{-i^*}'-\frac{\varepsilon}{C} \geq + \frac{1}{C}OPT_{-i^*} -\frac{\varepsilon}{C} \end{displaymath} - as $L$ is a fractional relaxation of $V$ on $\mathcal{N}\setminus \{i^*\}$. Also, + as $L$ is a fractional relaxation of $V$. Also, \begin{displaymath} - OPT(V,\mathcal{N},B) \leq OPT(V,\mathcal{N}\setminus\{i^*\}, B) + V(i^*) + OPT \leq OPT_{-i^*} + V(i^*) \end{displaymath} Hence: \begin{equation}\label{eq:bound1} - OPT(V,\mathcal{N}, B)\leq (1+C)V(i^*) + \varepsilon + OPT\leq (1+C)V(i^*) + \varepsilon \end{equation} - Note that $OPT(L_{\mathcal{N}\setminus\{i^*\}},B)\leq OPT(L_{\mathcal{N}},B)$. If the condition does not hold, + Note that $OPT_{-i^*}'\leq OPT'$. If the condition does not hold, from Lemmas \ref{lemma:relaxation} and \ref{lemma:greedy-bound}: \begin{align*} - V(i^*) & \stackrel{}\leq \frac{1}{C}L_{\mathcal{N}}(x^*) + \frac{\varepsilon}{C}\leq \frac{1}{C} - \big(4 OPT(V,\mathcal{N}, B) + 2 V(i^*)\big) + \frac{\varepsilon}{C}\\ + V(i^*) & \stackrel{}\leq \frac{1}{C}OPT_{-i^*}'+ \frac{\varepsilon}{C}\leq \frac{1}{C} + \big(4 OPT + 2 V(i^*)\big) + \frac{\varepsilon}{C}\\ & \leq \frac{1}{C}\left(\frac{4e}{e-1}\big(3 V(S_G) + 2 V(i^*)\big) + 2 V(i^*)\right) + \frac{\varepsilon}{C} @@ -354,18 +354,18 @@ This solution is: gives the approximation ratio in \eqref{approxbound}, and concludes the proof of Theorem~\ref{thm:main}.\hspace*{\stretch{1}}\qed %\end{proof} \subsection{Proof of Lemma~\ref{lemma:relaxation}}\label{sec:relaxation} -%Recall that, since $L_\mathcal{N}$ is a fractional relaxation of $V$, +%Recall that, since $L$ is a fractional relaxation of $V$, %\begin{displaymath} -% OPT(V,\mathcal{N},B) \leq OPT(L_\mathcal{N}, B). +% OPT \leq OPT(L, B). 
%\end{displaymath} %However, for the purpose of proving theorem~\ref{thm:main}, we need to bound -%$L_\mathcal{N}$ from above (up to a multiplicative factor) by $V$. +%$L$ from above (up to a multiplicative factor) by $V$. To prove Lemma~\ref{lemma:relaxation}, we use the \emph{pipage rounding} method of Ageev and Sviridenko~\cite{pipage}, where -$L_\mathcal{N}$ is first bounded from above by the multi-linear relaxation $F_\mathcal{N}$, which is itself +$L$ is first bounded from above by the multi-linear relaxation $F$, which is itself subsequently bounded by $V$. While the latter part is general and can be applied to the multi-linear extension of any submodular function, the former part is -specific to our choice for the function $L_\mathcal{N}$. %and is our main technical contribution (lemma~\ref{lemma:relaxation-ratio}). +specific to our choice for the function $L$. %and is our main technical contribution (lemma~\ref{lemma:relaxation-ratio}). We first prove a variant of the $\varepsilon$-convexity of the multi-linear extension \eqref{eq:multilinear} introduced by Ageev and Sviridenko @@ -375,7 +375,7 @@ referred to in the literature as \emph{cross-convexity} (see, \emph{e.g.}, \cite{dughmi}). Formally, %this property states that if we define: \begin{displaymath} - \tilde{F}_\lambda(\varepsilon) \defeq F_\mathcal{N}\big(\lambda + \varepsilon(e_i + \tilde{F}_\lambda(\varepsilon) \defeq F\big(\lambda + \varepsilon(e_i - e_j)\big) \end{displaymath} where $e_i$ and $e_j$ are two vectors of the standard basis of @@ -388,7 +388,7 @@ or the $j$-th component of $\lambda$ becomes integral. The lemma below proves that we can similarly trade a fractional component for another until one of them becomes integral \emph{while maintaining the -feasibility of the point at which $F_\mathcal{N}$ is evaluated}. 
Here, by feasibility of a point $\lambda$, we mean that it satisfies the budget constraint $\sum_{i=1}^n \lambda_i c_i \leq B$. \begin{lemma}[Rounding]\label{lemma:rounding} For any feasible $\lambda\in[0,1]^{n}$, there exists a feasible @@ -404,7 +404,7 @@ a point $\lambda$, we mean that it satisfies the budget constraint $\sum_{i=1}^n two fractional components, returns some feasible $\lambda'$ with one less fractional component such that: \begin{displaymath} - F_\mathcal{N}(\lambda) \leq F_\mathcal{N}(\lambda') + F(\lambda) \leq F(\lambda') \end{displaymath} Applying this procedure recursively yields the lemma's result. Let us consider such a feasible $\lambda$. Let $i$ and $j$ be two @@ -448,49 +448,19 @@ a point $\lambda$, we mean that it satisfies the budget constraint $\sum_{i=1}^n For all $\lambda\in[0,1]^{n},$ %\begin{displaymath} $ \frac{1}{2} - \,L_\mathcal{N}(\lambda)\leq - F_\mathcal{N}(\lambda)\leq L_{\mathcal{N}}(\lambda)$. + \,L(\lambda)\leq + F(\lambda)\leq L(\lambda)$. %\end{displaymath} \end{lemma} \begin{proof} + The bound $F(\lambda)\leq L(\lambda)$ follows from the concavity of the $\log\det$ function and Jensen's inequality. We will prove that $\frac{1}{2}$ is a lower bound of the ratio $\partial_i - F_\mathcal{N}(\lambda)/\partial_i L_\mathcal{N}(\lambda)$, where - $\partial_i\, \cdot$ denotes the partial derivative of a function with respect to the - $i$-th variable. This will be enough to conclude, by observing the following cases. 
- First, if the minimum of the ratio - $F_\mathcal{N}(\lambda)/L_\mathcal{N}(\lambda)$ is attained at a point interior to the hypercube, then it is - a critical point; such a critical point is - characterized by: - \begin{equation}\label{eq:lhopital} - \frac{F_\mathcal{N}(\lambda)}{L_\mathcal{N}(\lambda)} - = \frac{\partial_i F_\mathcal{N}(\lambda)}{\partial_i - L_\mathcal{N}(\lambda)} \geq \frac{1}{2} - \end{equation} - Second, if the minimum is attained as - $\lambda$ converges to zero in, \emph{e.g.}, the $l_2$ norm, by the Taylor approximation, one can write: - \begin{displaymath} - \frac{F_\mathcal{N}(\lambda)}{L_\mathcal{N}(\lambda)} - \sim_{\lambda\rightarrow 0} - \frac{\sum_{i\in \mathcal{N}}\lambda_i\partial_i F_\mathcal{N}(0)} - {\sum_{i\in\mathcal{N}}\lambda_i\partial_i L_\mathcal{N}(0)} - \geq \frac{1}{2}, - \end{displaymath} - \emph{i.e.}, the ratio $\frac{F_\mathcal{N}(\lambda)}{L_\mathcal{N}(\lambda)}$ is necessarily bounded from below by 1/2 for small enough $\lambda$. - Finally, if the minimum is attained on a face of the hypercube $[0,1]^n$ (a face is - defined as a subset of the hypercube where one of the variable is fixed to - 0 or 1), without loss of generality, we can assume that the minimum is - attained on the face where the $n$-th variable has been fixed - to 0 or 1. Then, either the minimum is attained at a point interior to the - face or on a boundary of the face. In the first case, relation - \eqref{eq:lhopital} still characterizes the minimum for $i< n$. - In the second case, by repeating the argument again by induction, we see - that all is left to do is to show that the bound holds for the vertices of - the cube (the faces of dimension 1). The vertices are exactly the binary - points, for which we know that both relaxations are equal to the value - function $V$. Hence, the ratio is equal to 1 on the vertices. - - Let us start by computing the derivatives of $F_\mathcal{N}$ and - $L_\mathcal{N}$ with respect to the $i$-th component. 
+ F(\lambda)/\partial_i L(\lambda)$, where + $\partial_i\, \cdot$ denotes the partial derivative with respect to the + $i$-th variable. + + Let us start by computing the derivatives of $F$ and + $L$ with respect to the $i$-th component. For $F$, it suffices to look at the derivative of $P_\mathcal{N}^\lambda(S)$: \begin{displaymath} @@ -503,7 +473,7 @@ For all $\lambda\in[0,1]^{n},$ \end{displaymath} Hence: \begin{multline*} - \partial_i F_\mathcal{N} = + \partial_i F = \sum_{\substack{S\subseteq\mathcal{N}\\ i\in S}} P_{\mathcal{N}\setminus\{i\}}^\lambda(S\setminus\{i\})V(S) - \sum_{\substack{S\subseteq\mathcal{N}\\ i\in \mathcal{N}\setminus S}} @@ -512,7 +482,7 @@ For all $\lambda\in[0,1]^{n},$ Now, using that every $S$ such that $i\in S$ can be uniquely written as $S'\cup\{i\}$, we can write: \begin{multline*} - \partial_i F_\mathcal{N} = + \partial_i F = \sum_{\substack{S\subseteq\mathcal{N}\\ i\in\mathcal{N}\setminus S}} P_{\mathcal{N}\setminus\{i\}}^\lambda(S)V(S\cup\{i\}) - \sum_{\substack{S\subseteq\mathcal{N}\\ i\in \mathcal{N}\setminus S}} @@ -523,15 +493,15 @@ For all $\lambda\in[0,1]^{n},$ $ V(S\cup\{i\}) - V(S) = \frac{1}{2}\log\left(1 + \T{x_i} A(S)^{-1}x_i\right)$. Using this, \begin{displaymath} - \partial_i F_\mathcal{N}(\lambda) = \frac{1}{2} + \partial_i F(\lambda) = \frac{1}{2} \sum_{\substack{S\subseteq\mathcal{N}\\ i\in\mathcal{N}\setminus S}} P_{\mathcal{N}\setminus\{i\}}^\lambda(S) \log\Big(1 + \T{x_i}A(S)^{-1}x_i\Big) \end{displaymath} - where $A(S) =I_d+ \T{X_S}X_S$. The computation of the derivative of $L_\mathcal{N}$ uses standard matrix + where $A(S) =I_d+ \T{X_S}X_S$. The computation of the derivative of $L$ uses standard matrix calculus and gives: \begin{displaymath} - \partial_i L_\mathcal{N}(\lambda) + \partial_i L(\lambda) =\frac{1}{2} \T{x_i}\tilde{A}(\lambda)^{-1}x_i \end{displaymath} where $\tilde{A}(\lambda) = I_d+\sum_{i\in \mathcal{N}}\lambda_ix_i\T{x_i}$. 
Using the following inequalities (where $A\geq B$ means that $A-B$ is positive semi-definite): @@ -545,7 +515,7 @@ Using this, \end{gather*} we get: \begin{align*} - \partial_i F_\mathcal{N}(\lambda) % & = \sum_{\substack{S\subseteq\mathcal{N}\\ i\in\mathcal{N}\setminus S}} P_\mathcal{N}^\lambda(S) \log\Big(1 + \T{x_i}A(S)^{-1}x_i\Big)\\ + \partial_i F(\lambda) % & = \sum_{\substack{S\subseteq\mathcal{N}\\ i\in\mathcal{N}\setminus S}} P_\mathcal{N}^\lambda(S) \log\Big(1 + \T{x_i}A(S)^{-1}x_i\Big)\\ & \geq \frac{1}{4} \sum_{\substack{S\subseteq\mathcal{N}\\ i\in\mathcal{N}\setminus S}} @@ -570,43 +540,75 @@ Using this, \end{displaymath} Hence: \begin{displaymath} - \partial_i F_\mathcal{N}(\lambda) \geq + \partial_i F(\lambda) \geq \frac{1}{4} \T{x_i}\bigg(\sum_{S\subseteq\mathcal{N}}P_\mathcal{N}^\lambda(S)A(S)^{-1}\bigg)x_i \end{displaymath} Finally, using that the inverse is a matrix convex function over symmetric positive definite matrices: \begin{align*} - \partial_i F_\mathcal{N}(\lambda) &\geq + \partial_i F(\lambda) &\geq \frac{1}{4} \T{x_i}\bigg(\sum_{S\subseteq\mathcal{N}}P_\mathcal{N}^\lambda(S)A(S)\bigg)^{-1}x_i \geq \frac{1}{2} - \partial_i L_\mathcal{N}(\lambda)\qed - \end{align*} + \partial_i L(\lambda) \end{align*} + +This will be enough to conclude, by observing the following cases. 
+ First, if the minimum of the ratio + $F(\lambda)/L(\lambda)$ is attained at a point interior to the hypercube, then it is + a critical point; such a critical point is + characterized by: + \begin{equation}\label{eq:lhopital} + \frac{F(\lambda)}{L(\lambda)} + = \frac{\partial_i F(\lambda)}{\partial_i + L(\lambda)} \geq \frac{1}{2} + \end{equation} + Second, if the minimum is attained as + $\lambda$ converges to zero in, \emph{e.g.}, the $l_2$ norm, by the Taylor approximation, one can write: + \begin{displaymath} + \frac{F(\lambda)}{L(\lambda)} + \sim_{\lambda\rightarrow 0} + \frac{\sum_{i\in \mathcal{N}}\lambda_i\partial_i F(0)} + {\sum_{i\in\mathcal{N}}\lambda_i\partial_i L(0)} + \geq \frac{1}{2}, + \end{displaymath} + \emph{i.e.}, the ratio $\frac{F(\lambda)}{L(\lambda)}$ is necessarily bounded from below by 1/2 for small enough $\lambda$. + Finally, if the minimum is attained on a face of the hypercube $[0,1]^n$ (a face is + defined as a subset of the hypercube where one of the variables is fixed to + 0 or 1), without loss of generality, we can assume that the minimum is + attained on the face where the $n$-th variable has been fixed + to 0 or 1. Then, either the minimum is attained at a point interior to the + face or on a boundary of the face. In the first case, relation + \eqref{eq:lhopital} still characterizes the minimum for $i< n$. + In the second case, by repeating the argument by induction, we see + that all that is left to do is to show that the bound holds for the vertices of + the cube (the faces of dimension 0). The vertices are exactly the binary + points, for which we know that both relaxations are equal to the value + function $V$. Hence, the ratio is equal to 1 on the vertices. \end{proof} - To prove Lemma~\ref{lemma:relaxation}, let us consider a feasible point $\lambda^*\in[0,1]^{n}$ such that $L_\mathcal{N}(\lambda^*) - = OPT(L_\mathcal{N}, B)$. 
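On instances small enough to enumerate all subsets, the sandwich $\frac{1}{2}\,L(\lambda)\leq F(\lambda)\leq L(\lambda)$ of Lemma~\ref{lemma:relaxation-ratio} can also be checked numerically. The sketch below re-implements both relaxations under the same assumptions as before (hypothetical names; $V(S)=\log\det(I_d+\sum_{i\in S} x_i\T{x_i})$, constants set aside):

```python
import itertools

import numpy as np

def value(S, X):
    # V(S) = log det(I_d + sum_{i in S} x_i x_i^T)
    d = X.shape[1]
    A = np.eye(d)
    for i in S:
        A += np.outer(X[i], X[i])
    return np.linalg.slogdet(A)[1]

def multilinear_F(lam, X):
    # Exact multi-linear extension, by enumeration over all subsets.
    n = len(lam)
    total = 0.0
    for r in range(n + 1):
        for S in itertools.combinations(range(n), r):
            chosen = set(S)
            p = 1.0
            for i in range(n):
                p *= lam[i] if i in chosen else 1.0 - lam[i]
            total += p * value(S, X)
    return total

def concave_L(lam, X):
    # Concave relaxation: log det(I_d + sum_i lam[i] * x_i x_i^T).
    d = X.shape[1]
    return np.linalg.slogdet(np.eye(d) + (X.T * np.asarray(lam)) @ X)[1]
```

The upper bound $F\leq L$ is exact Jensen; the lower bound is the content of the lemma, and random trials of this kind are a useful smoke test rather than a proof.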
By applying Lemma~\ref{lemma:relaxation-ratio} + To prove Lemma~\ref{lemma:relaxation}, let us consider a feasible point $\lambda^*\in[0,1]^{n}$ such that $L(\lambda^*) + = OPT(L, B)$. By applying Lemma~\ref{lemma:relaxation-ratio} and Lemma~\ref{lemma:rounding} we get a feasible point $\bar{\lambda}$ with at most one fractional component such that: \begin{equation}\label{eq:e1} - L_\mathcal{N}(\lambda^*) \leq 2 - F_\mathcal{N}(\bar{\lambda}) + L(\lambda^*) \leq 2 + F(\bar{\lambda}) \end{equation} Let $\lambda_i$ denote the fractional component of $\bar{\lambda}$ and $S$ denote the set whose indicator vector is $\bar{\lambda} - \lambda_i e_i$. - Using the fact that $F_\mathcal{N}$ is linear with respect to the $i$-th + Using the fact that $F$ is linear with respect to the $i$-th component and is a relaxation of the value function, we get: \begin{displaymath} - F_\mathcal{N}(\bar{\lambda}) = V(S) +\lambda_i V(S\cup\{i\}) + F(\bar{\lambda}) = V(S) +\lambda_i V(S\cup\{i\}) \end{displaymath} Using the submodularity of $V$: \begin{displaymath} - F_\mathcal{N}(\bar{\lambda}) \leq 2 V(S) + V(i) + F(\bar{\lambda}) \leq 2 V(S) + V(i) \end{displaymath} Note that since $\bar{\lambda}$ is feasible, $S$ is also feasible and - $V(S)\leq OPT(V,\mathcal{N}, B)$. Hence: + $V(S)\leq OPT$. Hence: \begin{equation}\label{eq:e2} - F_\mathcal{N}(\bar{\lambda}) \leq 2 OPT(V,\mathcal{N}, B) + F(\bar{\lambda}) \leq 2 OPT + \max_{i\in\mathcal{N}} V(i) \end{equation} Together, \eqref{eq:e1} and \eqref{eq:e2} imply the lemma. \hspace*{\stretch{1}}\qed diff --git a/problem.tex b/problem.tex index 2a40185..98ca4ac 100644 --- a/problem.tex +++ b/problem.tex @@ -49,7 +49,7 @@ knowledge; the objective of the buyer in this context is to select a set $S$ maximizing the value $V(S)$ subject to the constraint $\sum_{i\in S} c_i\leq B$. 
We write: \begin{equation}\label{eq:non-strategic} - OPT(V,\mathcal{N}, B) = \max_{S\subseteq\mathcal{N}} \left\{V(S) \mid + OPT = \max_{S\subseteq\mathcal{N}} \left\{V(S) \mid \sum_{i\in S}c_i\leq B\right\} \end{equation} for the optimal value achievable in the full-information case. %\stratis{Should be $OPT(V,c,B)$\ldots better drop the arguments here and introduce them wherever necessary.} @@ -82,7 +82,7 @@ A mechanism is truthful iff every $i \in \mathcal{N}$ and every two cost vector \eqref{eq:non-strategic}. Formally, there must exist some $\alpha\geq 1$ such that: \begin{displaymath} - OPT(V,\mathcal{N}, B) \leq \alpha V(S). + OPT \leq \alpha V(S). \end{displaymath} The approximation ratio captures the \emph{price of truthfulness}, \emph{i.e.}, the relative value loss incurred by adding the truthfulness constraint. % to the value function maximization. \item \emph{Computational efficiency.} The allocation and payment function |
