-rw-r--r--   main.tex   87
1 files changed, 40 insertions, 47 deletions
@@ -107,15 +107,14 @@ Here, following these ideas, we introduce a relaxation specifically tailored to
 function of \EDP. The multi-linear extension of \eqref{modified} can be written:
 \begin{displaymath}
   F_\mathcal{N}(\lambda) = \mathbb{E}_{S\sim
-  P_\mathcal{N}^\lambda}\left[\log\det A(S) \right]
-  \;\text{with}\; A(S) = I_d + \sum_{i\in S} x_i\T{x_i}
+  P_\mathcal{N}^\lambda}\Big[\log\det \big(I_d + \sum_{i\in S} x_i\T{x_i}\big) \Big].
 \end{displaymath}
 As in the case of \textsc{Coverage}, \eqref{relax} is not a convex optimization
 problem, and is not easy to solve directly. %As in ~\cite{singer-influence},
 We thus introduce an additional relaxation $L_{\mathcal{N}}$. Our relaxation
 follows naturally by swapping the expectation and the $\log\det$ in the above formula:
 \begin{align}\label{eq:concave}
-  L_\mathcal{N}(\lambda) & \defeq \log\det\left(\mathbb{E}_{S\sim
-  P_\mathcal{N}^\lambda}[A(S)]\right)\notag \\
+  L_\mathcal{N}(\lambda) & \defeq \log\det\Big(\mathbb{E}_{S\sim
+  P_\mathcal{N}^\lambda}\big[I_d + \sum_{i\in S} x_i\T{x_i}\big]\Big)\notag \\
   &= \log\det\left(I_d + \sum_{i\in\mathcal{N}} \lambda_i x_i\T{x_i}\right)
 \end{align}
@@ -128,8 +127,7 @@ be to prove that $OPT(L_\mathcal{N}, B)$ is close to $V(S_G)$. To do so, our
 main technical contribution is to prove that $L_\mathcal{N}$ has a bounded
 approximation ratio to the value function $V$ (Lemma~\ref{lemma:relaxation}).
 
-The resulting mechanism for \EDP{} is presented in Algorithm~\ref{mechanism}. The constant $C$ is an absolute constant, that we determine in Section~\ref{sec:proofofmainthm} (see \eqref{eq:C}).
-\begin{algorithm}
+\begin{algorithm}[!t]
 \caption{Mechanism for \EDP{}}\label{mechanism}
 \begin{algorithmic}[1]
 \State $i^* \gets \argmax_{j\in\mathcal{N}}V(j)$
@@ -150,6 +148,7 @@ The resulting mechanism for \EDP{} is presented in Algorithm~\ref{mechanism}. Th
 \EndIf
 \end{algorithmic}
 \end{algorithm}
+The resulting mechanism for \EDP{} is presented in Algorithm~\ref{mechanism}. The constant $C$ is an absolute constant that we determine in Section~\ref{sec:proofofmainthm} (see \eqref{eq:c}).
 We can now state our main result:
 \begin{theorem}\label{thm:main}
 There exists an absolute constant $C$ such that the mechanism described in Algorithm~\ref{mechanism} is truthful, individually rational
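The objects defined in \eqref{eq:concave} can be made concrete with a short NumPy sketch. This is our own illustration rather than anything from the paper or the patch, and the helper names (value, multilinear_F, relaxation_L) are invented for it: value computes $V(S)=\log\det(I_d+\sum_{i\in S}x_i x_i^T)$, multilinear_F estimates the multi-linear extension $F_\mathcal{N}(\lambda)$ by sampling $S\sim P_\mathcal{N}^\lambda$, and relaxation_L evaluates the closed form of $L_\mathcal{N}(\lambda)$. Because $\log\det$ is concave on positive definite matrices, Jensen's inequality keeps the sampled $F_\mathcal{N}$ below $L_\mathcal{N}$, which previews the easy half of Lemma~\ref{lemma:relaxation-ratio} further down.

import numpy as np

def value(X, S):
    """V(S) = log det(I_d + sum_{i in S} x_i x_i^T); X holds one row per data point."""
    d = X.shape[1]
    return np.linalg.slogdet(np.eye(d) + X[list(S)].T @ X[list(S)])[1]

def multilinear_F(X, lam, n_samples=2000, seed=0):
    """Monte Carlo estimate of F_N(lambda): each i enters S independently w.p. lambda_i."""
    rng = np.random.default_rng(seed)
    n = len(lam)
    draws = [value(X, [i for i in range(n) if rng.random() < lam[i]])
             for _ in range(n_samples)]
    return float(np.mean(draws))

def relaxation_L(X, lam):
    """L_N(lambda) = log det(I_d + sum_i lambda_i x_i x_i^T), concave in lambda."""
    d = X.shape[1]
    return np.linalg.slogdet(np.eye(d) + (X.T * lam) @ X)[1]

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))     # six experiments x_i in R^3
lam = rng.uniform(size=6)       # a fractional point lambda in [0,1]^6
print("L_N(lambda) =", relaxation_L(X, lam))
print("F_N(lambda) ~", multilinear_F(X, lam))   # Jensen: F_N <= L_N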
@@ -335,26 +334,26 @@ This solution is:
 gives the approximation ratio in \eqref{approxbound}, and concludes the proof of
 Theorem~\ref{thm:main}.\hspace*{\stretch{1}}\qed
 %\end{proof}
 \subsection{Proof of Lemma~\ref{lemma:relaxation}}\label{sec:relaxation}
-
-Since $L_\mathcal{N}$ is a relaxation of the value function $V$, we have:
-\begin{displaymath}
-  OPT(V,\mathcal{N},B) \leq OPT(L_\mathcal{N}, B)
-\end{displaymath}
-However, for the purpose of proving theorem~\ref{thm:main}, we need to bound
-$L_\mathcal{N}$ from above (up to a multiplicative factor) by $V$. We use the
-\emph{pipage rounding} method from Ageev and Sviridenko \cite{pipage}, where
-$L_\mathcal{N}$ is first bounded from above by $F_\mathcal{N}$, which is itself
-subsequently bounded by $V$. While the latter part is general and can be apply
+%Recall that, since $L_\mathcal{N}$ is a fractional relaxation of $V$,
+%\begin{displaymath}
+%  OPT(V,\mathcal{N},B) \leq OPT(L_\mathcal{N}, B).
+%\end{displaymath}
+%However, for the purpose of proving theorem~\ref{thm:main}, we need to bound
+%$L_\mathcal{N}$ from above (up to a multiplicative factor) by $V$.
+To prove Lemma~\ref{lemma:relaxation}, we use the
+\emph{pipage rounding} method of Ageev and Sviridenko~\cite{pipage}, where
+$L_\mathcal{N}$ is first bounded from above by the multi-linear relaxation $F_\mathcal{N}$, which is itself
+subsequently bounded by $V$. While the latter part is general and can be applied
 to the multi-linear extension of any submodular function, the former part is
-specific to our choice for the function $L_\mathcal{N}$ and is our main
-technical contribution (lemma~\ref{lemma:relaxation-ratio}).
+specific to our choice for the function $L_\mathcal{N}$. %and is our main technical contribution (lemma~\ref{lemma:relaxation-ratio}).
 
-First we prove a variant of the $\varepsilon$-convexity of the multi-linear
+We first prove a variant of the $\varepsilon$-convexity of the multi-linear
 extension \eqref{eq:multilinear} introduced by Ageev and Sviridenko
-\cite{pipage} which allows to trade a fractional component for another until
+\cite{pipage}, which allows us to trade one fractional component of the solution for another until
 one of them becomes integral, without losing any value. This property is also
-referred to in the literature as \emph{cross-convexity} (see \emph{e.g.}
-\cite{dughmi}). Formally, this property states that if we define:
+referred to in the literature as \emph{cross-convexity} (see, \emph{e.g.},
+\cite{dughmi}). Formally, %this property states that
+if we define:
 \begin{displaymath}
   \tilde{F}_\lambda(\varepsilon) \defeq F_\mathcal{N}\big(\lambda +
   \varepsilon(e_i - e_j)\big)
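The cross-convexity property just defined can be checked numerically on a toy instance. The sketch below is ours (exact_F and value are invented names): it computes $F_\mathcal{N}$ exactly by enumerating all subsets, evaluates $\tilde{F}_\lambda(\varepsilon) = F_\mathcal{N}(\lambda + \varepsilon(e_i - e_j))$ on a uniform grid over the admissible interval, and confirms that its second differences are nonnegative, so the maximum is attained at an endpoint where the $i$-th or the $j$-th coordinate becomes integral.

from itertools import combinations
import numpy as np

def value(X, S):
    d = X.shape[1]
    return np.linalg.slogdet(np.eye(d) + X[list(S)].T @ X[list(S)])[1]

def exact_F(X, lam):
    """F_N(lambda) = sum_S prod_{i in S} lam_i prod_{i not in S} (1 - lam_i) V(S)."""
    n = len(lam)
    return sum(
        np.prod([lam[k] if k in S else 1 - lam[k] for k in range(n)]) * value(X, S)
        for r in range(n + 1) for S in combinations(range(n), r)
    )

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))
lam = np.array([0.4, 0.7, 0.2, 0.9, 0.5])
i, j = 0, 1
# lambda + eps*(e_i - e_j) stays in [0,1]^n exactly for eps in this interval
lo, hi = max(-lam[i], lam[j] - 1), min(1 - lam[i], lam[j])
grid = np.linspace(lo, hi, 11)
vals = [exact_F(X, lam + e * (np.eye(5)[i] - np.eye(5)[j])) for e in grid]
print(np.all(np.diff(vals, 2) >= -1e-9))        # convex along e_i - e_j
print(np.argmax(vals) in (0, len(grid) - 1))    # maximum attained at an endpoint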
@@ -367,30 +366,27 @@ $\reals^{n}$, then $\tilde{F}$ is convex. Hence its maximum over the interval:
 is attained at one of the boundaries of $I_\lambda$ for which one of the $i$-th
 or the $j$-th component of $\lambda$ becomes integral.
 
-The lemma below proves that we can similarly trade one fractional component for
+The lemma below proves that we can similarly trade a fractional component for
 another until one of them becomes integral \emph{while maintaining the feasibility
 of the point at which $F_\mathcal{N}$ is evaluated}. Here, by feasibility of
-a point $\lambda$, we mean that it satisfies the budget constraint $\sum_{1\leq
-i\leq n} \lambda_i c_i \leq B$.
-
+a point $\lambda$, we mean that it satisfies the budget constraint $\sum_{i=1}^n \lambda_i c_i \leq B$.
 \begin{lemma}[Rounding]\label{lemma:rounding}
   For any feasible $\lambda\in[0,1]^{n}$, there exists a feasible
-  $\bar{\lambda}\in[0,1]^{n}$ such that at most one of its component is
-  fractional, that is, lies in $(0,1)$ and:
-  \begin{displaymath}
-    F_{\mathcal{N}}(\lambda)\leq F_{\mathcal{N}}(\bar{\lambda})
-  \end{displaymath}
+  $\bar{\lambda}\in[0,1]^{n}$ such that at most one of its components is
+  fractional %, that is, lies in $(0,1)$ and:
+  and
+  %\begin{displaymath}
+  $ F_{\mathcal{N}}(\lambda)\leq F_{\mathcal{N}}(\bar{\lambda})$.
+  %\end{displaymath}
 \end{lemma}
-
 \begin{proof}
-  We give a rounding procedure which given a feasible $\lambda$ with at least
-  two fractional components, returns some $\lambda'$ with one less fractional
-  component, feasible, and such that:
+  We give a rounding procedure which, given a feasible $\lambda$ with at least
+  two fractional components, returns some feasible $\lambda'$ with one fewer fractional
+  component such that:
  \begin{displaymath}
    F_\mathcal{N}(\lambda) \leq F_\mathcal{N}(\lambda')
  \end{displaymath}
  Applying this procedure recursively yields the lemma's result.
-  Let us consider such a feasible $\lambda$.
  Let $i$ and $j$ be two fractional components of $\lambda$ and let us define the
  following function:
@@ -399,22 +395,20 @@ i\leq n} \lambda_i c_i \leq B$.
     \quad\textrm{where} \quad \lambda_\varepsilon = \lambda +
     \varepsilon\left(e_i-\frac{c_i}{c_j}e_j\right)
   \end{displaymath}
-
   It is easy to see that if $\lambda$ is feasible, then:
   \begin{multline}\label{eq:convex-interval}
     \forall\varepsilon\in\Big[\max\Big(-\lambda_i,(\lambda_j-1)\frac{c_j}{c_i}\Big),
     \min\Big(1-\lambda_i, \lambda_j \frac{c_j}{c_i}\Big)\Big],\;\\
     \lambda_\varepsilon\;\;\textrm{is feasible}
   \end{multline}
-
-  Furthermore, the function $F_\lambda$ is convex, indeed:
+  Furthermore, the function $F_\lambda$ is convex; indeed:
   \begin{align*}
     F_\lambda(\varepsilon) & = \mathbb{E}_{S'\sim P_{\mathcal{N}\setminus\{i,j\}}^\lambda(S')}\Big[
     (\lambda_i+\varepsilon)\Big(\lambda_j-\varepsilon\frac{c_i}{c_j}\Big)V(S'\cup\{i,j\})\\
     & + (\lambda_i+\varepsilon)\Big(1-\lambda_j+\varepsilon\frac{c_i}{c_j}\Big)V(S'\cup\{i\})\\
     & + (1-\lambda_i-\varepsilon)\Big(\lambda_j-\varepsilon\frac{c_i}{c_j}\Big)V(S'\cup\{j\})\\
-    & + (1-\lambda_i-\varepsilon)\Big(1-\lambda_j+\varepsilon\frac{c_i}{c_j}\Big)V(S')\Big]\\
+    & + (1-\lambda_i-\varepsilon)\Big(1-\lambda_j+\varepsilon\frac{c_i}{c_j}\Big)V(S')\Big]
   \end{align*}
   Thus, $F_\lambda$ is a degree 2 polynomial whose dominant coefficient is:
   \begin{multline*}
@@ -430,15 +424,14 @@ i\leq n} \lambda_i c_i \leq B$.
 \end{proof}
 
 \begin{lemma}\label{lemma:relaxation-ratio}
-  The following inequality holds:
-  \begin{displaymath}
-    \forall\lambda\in[0,1]^{n},\;
-    \frac{1}{2}
+  % The following inequality holds:
+  For all $\lambda\in[0,1]^{n}$,
+  %\begin{displaymath}
+  $ \frac{1}{2}
   \,L_\mathcal{N}(\lambda)\leq
-  F_\mathcal{N}(\lambda)\leq L_{\mathcal{N}}(\lambda)
-  \end{displaymath}
+  F_\mathcal{N}(\lambda)\leq L_{\mathcal{N}}(\lambda)$.
+  %\end{displaymath}
 \end{lemma}
-
 \begin{proof}
   We will prove that $\frac{1}{2}$ is a lower bound of the ratio
   $\partial_i F_\mathcal{N}(\lambda)/\partial_i L_\mathcal{N}(\lambda)$. Where
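A quick numerical sanity check of Lemma~\ref{lemma:relaxation-ratio}, again our own sketch with invented helper names (value, exact_F, relaxation_L) and not part of the patch: on small random instances it verifies $\frac{1}{2}\,L_\mathcal{N}(\lambda) \leq F_\mathcal{N}(\lambda) \leq L_\mathcal{N}(\lambda)$, computing $F_\mathcal{N}$ exactly by enumeration. The upper bound is just Jensen's inequality for the concave $\log\det$; the factor $\frac{1}{2}$ is the content of the lemma.

from itertools import combinations
import numpy as np

def value(X, S):
    d = X.shape[1]
    return np.linalg.slogdet(np.eye(d) + X[list(S)].T @ X[list(S)])[1]

def exact_F(X, lam):
    n = len(lam)
    return sum(
        np.prod([lam[k] if k in S else 1 - lam[k] for k in range(n)]) * value(X, S)
        for r in range(n + 1) for S in combinations(range(n), r)
    )

def relaxation_L(X, lam):
    return np.linalg.slogdet(np.eye(X.shape[1]) + (X.T * lam) @ X)[1]

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 3))
for _ in range(5):
    lam = rng.uniform(size=6)
    F, L = exact_F(X, lam), relaxation_L(X, lam)
    assert 0.5 * L - 1e-9 <= F <= L + 1e-9, (F, L)
print("1/2 * L_N <= F_N <= L_N held on all trials")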

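Finally, the rounding step used in the proof of Lemma~\ref{lemma:rounding} can be sketched in the same style (again our illustration; pipage_round and the helpers are invented names). Two fractional coordinates $i,j$ are traded along the budget-neutral direction $e_i - \frac{c_i}{c_j} e_j$; since $F_\lambda(\varepsilon)$ is convex, jumping to the better endpoint of the interval in \eqref{eq:convex-interval} never decreases $F_\mathcal{N}$, keeps $\sum_i \lambda_i c_i$ unchanged, and makes one of the two coordinates integral.

from itertools import combinations
import numpy as np

def value(X, S):
    d = X.shape[1]
    return np.linalg.slogdet(np.eye(d) + X[list(S)].T @ X[list(S)])[1]

def exact_F(X, lam):
    n = len(lam)
    return sum(
        np.prod([lam[k] if k in S else 1 - lam[k] for k in range(n)]) * value(X, S)
        for r in range(n + 1) for S in combinations(range(n), r)
    )

def pipage_round(X, lam, c, tol=1e-9):
    """Round lambda until at most one coordinate is fractional, keeping the
    budget sum_i lambda_i c_i constant and never decreasing F_N."""
    lam = lam.astype(float).copy()
    while True:
        frac = [k for k in range(len(lam)) if tol < lam[k] < 1 - tol]
        if len(frac) < 2:
            return lam
        i, j = frac[:2]
        direction = np.zeros(len(lam))
        direction[i], direction[j] = 1.0, -c[i] / c[j]
        lo = max(-lam[i], (lam[j] - 1) * c[j] / c[i])   # endpoints of the feasible interval
        hi = min(1 - lam[i], lam[j] * c[j] / c[i])
        # F_lambda is convex in eps, so the better endpoint dominates eps = 0
        lam = max((lam + lo * direction, lam + hi * direction),
                  key=lambda cand: exact_F(X, cand))

rng = np.random.default_rng(2)
X = rng.normal(size=(5, 2))
c = rng.uniform(1, 3, size=5)          # costs c_i
lam = rng.uniform(size=5)              # a fractional point; take B = lam @ c so it is feasible
rounded = pipage_round(X, lam, c)
print(rounded)                                          # at most one fractional entry
print(exact_F(X, lam) <= exact_F(X, rounded) + 1e-9)    # F_N did not decrease
print(np.isclose(lam @ c, rounded @ c))                 # budget unchanged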