\subsection{Truthful, Constant Approximation Mechanism} In this section we present a mechanism for \EDP. Previous works on maximizing submodular functions \cite{nemhauser, sviridenko-submodular} and designing auction mechanisms for submodular utility functions \cite{singer-mechanisms, chen, singer-influence} rely on a greedy heuristic. In this heuristic, elements are added to the solution set according to the following greedy selection rule: assuming that a set $S$ has already been selected, the next element to be included in the solution set is: \begin{displaymath} i = \argmax_{j\in\mathcal{N}\setminus S}\frac{V(S\cup\{j\}) - V(S)}{c_j} \end{displaymath} This is the generalization of the \emph{value-per-cost} ratio used in the greedy heuristic for knapsack problems. However, note that for general submodular functions, the value of an element depends on the set to which it is added. Unfortunately, even in the non-strategic case, the greedy heuristic alone gives an unbounded approximation ratio. Let us introduce $i^* = \argmax_{i\in\mathcal{N}} V(i)$, the element of maximum value (as a singleton set). It has been noted by Khuller et al. \cite{khuller} that for the maximum coverage problem, taking the maximum between the greedy solution and $V(i^*)$ gives a $\frac{2e}{e-1}$ approximation ratio. In the general case, we have the following result from Singer \cite{singer-influence}, which follows from Chen et al. \cite{chen}: \begin{lemma}[Singer \cite{singer-influence}]\label{lemma:greedy-bound} Let $S_G$ be the set computed by the greedy heuristic and let us define $i^*$: \begin{displaymath} i^* = \argmax_{i\in\mathcal{N}} V(i) \end{displaymath} then the following inequality holds: \begin{displaymath} OPT(V,\mathcal{N},B) \leq \frac{e}{e-1}\big( 3 V(S_G) + 2 V(i^*)\big) \end{displaymath} \end{lemma} Hence, taking the maximum between $V(S_G)$ and $V(i^*)$ yields an approximation ratio of $\frac{5e}{e-1}$. However, Singer \cite{singer-influence} notes that this approach breaks incentive compatibility and therefore cannot be directly applied to the strategic case. Indeed, assume that we are in a situation where the mechanism has allocated to the greedy set ($V(S_G) \geq V(i^*)$). If an agent $i$ from the greedy set reduces her cost, it could happen that $V(S_G)$ becomes smaller than $V(i^*)$. In this case the mechanism will allocate to $i^*$ and $i$ will be out of the allocated set. This breaks the monotonicity of the allocation function. The way this issue has been addressed thus far \cite{chen, singer-influence} is to introduce a third quantity: if $V(i^*)$ is larger than this quantity, the mechanism allocates to $i^*$, otherwise it allocates to the greedy set $S_G$. This quantity must be provably close to $V(S_G)$, to keep a bounded approximation ratio, while maintaining the monotonicity of the allocation algorithm. Furthermore, it must be computable in polynomial time to keep an overall polynomial complexity for the allocation algorithm. If the underlying non-strategic optimization program \eqref{eq:non-strategic} can be solved in polynomial time, Chen et al. \cite{chen} prove that allocating to $i^*$ when $V(i^*) \geq C\cdot OPT(V,\mathcal{N}\setminus\{i^*\}, B)$ (for some constant $C$) and to $S_G$ otherwise yields an 8.34-approximation mechanism. For specific problems,
namely knapsack for Chen et al. \cite{chen} and maximum coverage for Singer \cite{singer-influence}, one can instead compare $V(i^*)$ to $OPT(R_{\mathcal{N}\setminus\{i^*\}}, B)$, where $R_\mathcal{N}$ denotes a fractional relaxation of the function $V$ over the set $\mathcal{N}$. We say that $R_\mathcal{N}$ is a fractional relaxation of $V$ over the set $\mathcal{N}$ if (a) $R_\mathcal{N}$ is a function defined on the hypercube $[0, 1]^{|\mathcal{N}|}$ and (b) for all $S\subseteq\mathcal{N}$, if $\mathbf{1}_S$ denotes the indicator vector of $S$, we have $R_\mathcal{N}(\mathbf{1}_S) = V(S)$. The optimization program \eqref{eq:non-strategic} extends naturally to relaxations: \begin{displaymath} OPT(R_\mathcal{N}, B) = \max_{\lambda\in[0, 1]^{|\mathcal{N}|}} \left\{R_\mathcal{N}(\lambda) \mid \sum_{i=1}^{|\mathcal{N}|} \lambda_i c_i \leq B\right\} \end{displaymath} A relaxation which is commonly used, due to its well-behaved properties in the context of maximizing submodular functions, is the \emph{multi-linear} extension: \begin{equation}\label{eq:multilinear} F_\mathcal{N}^V(\lambda) = \mathbb{E}_{S\sim P_\mathcal{N}^\lambda}\left[V(S)\right] = \sum_{S\subseteq\mathcal{N}} P_\mathcal{N}^\lambda(S) V(S) \end{equation} $P_\mathcal{N}^\lambda(S)$ is the probability of choosing the set $S$ if each element $i\in\mathcal{N}$ is picked independently with probability $\lambda_i$ and rejected with probability $1-\lambda_i$: \begin{displaymath} P_\mathcal{N}^\lambda(S) = \prod_{i\in S} \lambda_i \prod_{i\in\mathcal{N}\setminus S} (1 - \lambda_i) \end{displaymath} For knapsack, Chen et al. \cite{chen} directly use the multi-linear extension as the relaxation to which $V(i^*)$ is compared. For maximum coverage however, the optimal value of the multi-linear extension cannot be computed in polynomial time. Thus, Singer \cite{singer-influence} introduces a second relaxation of the value function which can be proven to be close to the multi-linear extension, by using the \emph{pipage rounding} method of Ageev and Sviridenko \cite{pipage}. Here, following these ideas, we introduce another relaxation of the value function. Note that in our case, the multi-linear extension can be written (we drop the superscript $V$ from now on): \begin{displaymath} F_\mathcal{N}(\lambda) = \mathbb{E}_{S\sim P_\mathcal{N}^\lambda}\left[\log\det A(S) \right] \;\text{with}\; A(S) = I_d + \sum_{i\in S} x_i\T{x_i} \end{displaymath} Our relaxation follows naturally by swapping the expectation and the $\log\det$ in the above formula: \begin{align}\label{eq:concave} L_\mathcal{N}(\lambda) & \defeq \log\det\left(\mathbb{E}_{S\sim P_\mathcal{N}^\lambda}[A(S)]\right)\notag \\ &= \log\det\left(I_d + \sum_{i\in\mathcal{N}} \lambda_i x_i\T{x_i}\right) \end{align} This function is well-known to be concave and even self-concordant (see \emph{e.g.} \cite{boyd2004convex}). In this case, the analysis of Newton's method for self-concordant functions in \cite{boyd2004convex} shows that it is possible to find the maximum of $L_\mathcal{N}$ to any precision $\varepsilon$ in $O(\log\log\varepsilon^{-1})$ iterations. The main challenge will be to prove that $OPT(L_\mathcal{N}, B)$ is close to $V(S_G)$. To do so, our main technical contribution is to prove that $L_\mathcal{N}$ approximates the value function $V$ up to a bounded factor (lemma~\ref{lemma:relaxation}).
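To make these objects concrete, the following sketch computes the value function $V$, a Monte-Carlo estimate of the multi-linear extension $F_\mathcal{N}$, and the concave relaxation $L_\mathcal{N}$. It is a minimal illustration only: we assume the experiment vectors $x_i$ are the unit-norm rows of an array \texttt{X}, and all function names are ours.
\begin{verbatim}
import numpy as np

def V(X, S):
    """Value function V(S) = log det(I_d + sum_{i in S} x_i x_i^T)."""
    d = X.shape[1]
    M = X[list(S)]
    return np.linalg.slogdet(np.eye(d) + M.T @ M)[1]

def L(X, lam):
    """Concave relaxation L_N(lam) = log det(I_d + sum_i lam_i x_i x_i^T)."""
    d = X.shape[1]
    return np.linalg.slogdet(np.eye(d) + (X.T * lam) @ X)[1]

def F_mc(X, lam, n_samples=20000, seed=0):
    """Monte-Carlo estimate of the multi-linear extension F_N(lam):
    element i enters the random set independently with probability lam_i."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    return np.mean([V(X, np.flatnonzero(rng.random(n) < lam))
                    for _ in range(n_samples)])

# Jensen's inequality (log det is concave) gives F_N <= L_N; lemma
# (relaxation-ratio) below further bounds F_N from below by L_N / 2.
rng = np.random.default_rng(1)
X = rng.standard_normal((6, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)  # unit-norm experiments
lam = rng.random(6)
print(L(X, lam) / 2, F_mc(X, lam), L(X, lam))
\end{verbatim}
The mechanism for \EDP{} is presented in algorithm \ref{mechanism}.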
\begin{algorithm} \caption{Mechanism for \EDP{}}\label{mechanism} \begin{algorithmic}[1] \State $i^* \gets \argmax_{j\in\mathcal{N}}V(j)$ \State $x^* \gets \argmax_{x\in[0,1]^{|\mathcal{N}|}} \{L_{\mathcal{N}\setminus\{i^*\}}(x) \mid c(x)\leq B\}$ \Statex \If{$L_{\mathcal{N}\setminus\{i^*\}}(x^*) < CV(i^*)$} \State \textbf{return} $\{i^*\}$ \Else \State $i \gets \argmax_{j\in\mathcal{N}}\frac{V(j)}{c_j}$ \State $S_G \gets \emptyset$ \While{$c_i\leq \frac{B}{2}\frac{V(S_G\cup\{i\})-V(S_G)}{V(S_G\cup\{i\})}$} \State $S_G \gets S_G\cup\{i\}$ \State $i \gets \argmax_{j\in\mathcal{N}\setminus S_G} \frac{V(S_G\cup\{j\})-V(S_G)}{c_j}$ \EndWhile \State \textbf{return} $S_G$ \EndIf \end{algorithmic} \end{algorithm} We can now state the main result of this section: \begin{theorem}\label{thm:main} The mechanism described in algorithm~\ref{mechanism} is truthful and budget feasible. Furthermore, for any $\varepsilon>0$, the mechanism has a complexity $O\left(\text{poly}(|\mathcal{N}|, d, \log\log \varepsilon^{-1})\right)$ and returns a set $S^*$ such that: \begin{align*} OPT(V,\mathcal{N}, B) & \leq \frac{14e-3 + \sqrt{160e^2-48e + 9}}{2(e-1)} V(S^*)+ \varepsilon\\ & \simeq 19.68V(S^*) + \varepsilon \end{align*} \end{theorem} In addition, we prove the following lower bound. \begin{theorem}\label{thm:lowerbound} There is no $2$-approximate, truthful, budget feasible, individually rational mechanism for \EDP. \end{theorem} \begin{proof} Suppose, for contradiction, that such a mechanism exists. Consider two experiments with dimension $d=2$, such that $x_1 = e_1=[1 ,0]$, $x_2=e_2=[0,1]$ and $c_1=c_2=B/2+\epsilon$. Then one of the two experiments, say $x_1$, must be in the set selected by the mechanism; otherwise the approximation ratio is unbounded, a contradiction. If $x_1$ lowers its cost to $B/2-\epsilon$, by monotonicity it remains in the solution; by threshold payment, it is paid at least $B/2+\epsilon$. By budget feasibility and individual rationality, $x_2$ is then not included in the solution: hence the selected set attains a value of $\log2$, while the optimal value is $2\log 2$. \end{proof} \subsection{Proof of theorem~\ref{thm:main}} The proof of the properties of the mechanism is broken down into lemmas. Monotonicity and budget feasibility follow from the analysis of Chen et al. \cite{chen}. The proof of the approximation ratio is done in lemma~\ref{lemma:approx} and uses a bound on our concave relaxation $L_\mathcal{N}$ (lemma~\ref{lemma:relaxation}) which is our main technical contribution. The proof of this lemma is done in a dedicated section (\ref{sec:relaxation}). \begin{lemma}\label{lemma:monotone} The mechanism is monotone. \end{lemma} \begin{proof} Suppose, for contradiction, that there exists an agent $i$ that has been selected by the mechanism and that would not have been selected had she reported a cost $c_i'\leq c_i$ (all the other costs staying the same). If $i\neq i^*$ and $i$ has been selected, then we are in the case where $L_{\mathcal{N}\setminus\{i^*\}}(x^*) \geq C V(i^*)$ and $i$ was included in the result set by the greedy part of the mechanism. By reporting a cost $c_i'\leq c_i$, using the submodularity of $V$, we see that $i$ will satisfy the greedy selection rule: \begin{displaymath} i = \argmax_{j\in\mathcal{N}\setminus S} \frac{V(S\cup\{j\}) - V(S)}{c_j} \end{displaymath} at the same or an earlier iteration of the greedy heuristic. Let us denote by $S_i$ (resp. $S_i'$) the set to which $i$ is added when reporting cost $c_i$ (resp. $c_i'$).
We have $S_i'\subseteq S_i$. Moreover: \begin{align*} c_i' & \leq c_i \leq \frac{B}{2}\frac{V(S_i\cup\{i\})-V(S_i)}{V(S_i\cup\{i\})}\\ & \leq \frac{B}{2}\frac{V(S_i'\cup\{i\})-V(S_i')}{V(S_i'\cup\{i\})} \end{align*} Hence $i$ will still be included in the result set. If $i = i^*$, $i$ is included iff $L_{\mathcal{N}\setminus\{i^*\}}(x^*) < C V(i^*)$. Reporting $c_i'$ instead of $c_i$ changes neither $V(i^*)$ nor $L_{\mathcal{N}\setminus\{i^*\}}(x^*)$ (which is computed over $\mathcal{N}\setminus\{i^*\}$). Thus $i$ is still included when reporting a lower cost. \end{proof} \begin{lemma}\label{lemma:budget-feasibility} The mechanism is budget feasible. \end{lemma} \begin{proof} Let us denote by $S_G$ the set selected by the greedy heuristic in the mechanism of algorithm~\ref{mechanism}. Let $i\in S_G$, and let us denote by $S_i$ the solution set that had been selected by the greedy heuristic before $i$ was added. We use the following result from Chen et al. \cite{chen}, which bounds the reported cost of an agent selected by the greedy heuristic and holds for any submodular function $V$: \begin{equation}\label{eq:budget} c_i \leq \frac{V(S_i\cup\{i\}) - V(S_i)}{V(S_G)} B \end{equation} Assume now that our mechanism selects point $i^*$. In this case, her payment is $B$ and the mechanism is budget-feasible. Otherwise, the mechanism selects the set $S_G$. In this case, \eqref{eq:budget} shows that the threshold payment of agent $i$ is bounded by: \begin{displaymath} \frac{V(S_i\cup\{i\}) - V(S_i)}{V(S_G)} B \end{displaymath} Hence, since the marginal contributions telescope to $V(S_G)$, the total payment is bounded by: \begin{displaymath} \sum_{i\in S_G} \frac{V(S_i\cup\{i\}) - V(S_i)}{V(S_G)} B \leq B \end{displaymath} \end{proof} \begin{lemma}\label{lemma:complexity} For any $\varepsilon > 0$, the complexity of algorithm~\ref{mechanism} is $O(\text{poly}(|\mathcal{N}|, d, \log\log \varepsilon^{-1}))$. \end{lemma} \begin{proof} The value function $V$ \eqref{modified} can be computed in time $O(\text{poly}(|\mathcal{N}|, d))$ and the mechanism only involves a linear number of queries to the function $V$. The function $\log\det$ is concave and self-concordant (see \cite{boyd2004convex}), so for any $\varepsilon$, its maximum can be found to a precision $\varepsilon$ in $O(\log\log\varepsilon^{-1})$ iterations of Newton's method. Each iteration of Newton's method can be done in time $O(\text{poly}(|\mathcal{N}|, d))$. Thus, line 2 of algorithm~\ref{mechanism} can be computed in time $O(\text{poly}(|\mathcal{N}|, d, \log\log \varepsilon^{-1}))$. \end{proof}
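To illustrate line 2 of algorithm~\ref{mechanism}, the sketch below maximizes the concave relaxation under the budget constraint. The paper's complexity guarantee relies on Newton's method for self-concordant functions; this sketch instead delegates to a generic constrained solver (\texttt{scipy.optimize.minimize} with SLSQP), a convenience assumption on our part rather than the authors' routine.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def maximize_L(X, c, B):
    """Approximately solve max { L(lam) : <c, lam> <= B, lam in [0,1]^n }
    where L(lam) = log det(I_d + sum_i lam_i x_i x_i^T) is concave."""
    n, d = X.shape
    def neg_L(lam):
        return -np.linalg.slogdet(np.eye(d) + (X.T * lam) @ X)[1]
    def neg_grad(lam):
        A_inv = np.linalg.inv(np.eye(d) + (X.T * lam) @ X)
        # dL/dlam_i = x_i^T A(lam)^{-1} x_i
        return -np.einsum('ij,jk,ik->i', X, A_inv, X)
    lam0 = np.full(n, min(1.0, B / c.sum()))   # feasible starting point
    res = minimize(neg_L, lam0, jac=neg_grad,
                   bounds=[(0.0, 1.0)] * n,
                   constraints=[{'type': 'ineq',
                                 'fun': lambda lam: B - c @ lam}],
                   method='SLSQP')
    return res.x, -res.fun
\end{verbatim}
Finally, we prove the approximation ratio of the mechanism. We use the following lemma, which bounds the optimum of the relaxation $L_\mathcal{N}$ in terms of the value function. Its proof is our main technical contribution and is given in section~\ref{sec:relaxation}.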
\begin{lemma}\label{lemma:relaxation} We have: \begin{displaymath} OPT(L_\mathcal{N}, B) \leq 4 OPT(V,\mathcal{N},B) + 2\max_{i\in\mathcal{N}}V(i) \end{displaymath} \end{lemma} \begin{lemma}\label{lemma:approx} For any $\varepsilon > 0$, if $L(x^*)$ has been computed to a precision $\varepsilon$, then the set $S^*$ allocated by the mechanism is such that: \begin{displaymath} OPT(V,\mathcal{N}, B) \leq \frac{14e-3 + \sqrt{160e^2-48e + 9}}{2(e-1)} V(S^*)+ \varepsilon \end{displaymath} \end{lemma} \begin{proof} We assume that on line 2 of algorithm~\ref{mechanism}, a quantity $\tilde{L}$ such that $\tilde{L}-\varepsilon\leq L(x^*) \leq \tilde{L}+\varepsilon$ has been computed (lemma~\ref{lemma:complexity} states that this can be done in a time within our complexity guarantee). If the condition on line 3 of the algorithm holds, then, using that $L_{\mathcal{N}\setminus\{i^*\}}$ is a relaxation of $V$ over $\mathcal{N}\setminus\{i^*\}$: \begin{displaymath} V(i^*) \geq \frac{1}{C}L(x^*)-\frac{\varepsilon}{C} \geq \frac{1}{C}OPT(V,\mathcal{N}\setminus\{i^*\}, B) - \frac{\varepsilon}{C} \end{displaymath} But: \begin{displaymath} OPT(V,\mathcal{N},B) \leq OPT(V,\mathcal{N}\setminus\{i^*\}, B) + V(i^*) \end{displaymath} Hence: \begin{equation}\label{eq:bound1} OPT(V,\mathcal{N}, B)\leq (1+C)V(i^*) + \varepsilon \end{equation} If the condition of the algorithm does not hold, by applying lemmas \ref{lemma:relaxation} and \ref{lemma:greedy-bound}: \begin{align*} V(i^*) & \leq \frac{1}{C}L(x^*) + \frac{\varepsilon}{C}\leq \frac{1}{C} \big(4 OPT(V,\mathcal{N}, B) + 2 V(i^*)\big) + \frac{\varepsilon}{C}\\ & \leq \frac{1}{C}\left(\frac{4e}{e-1}\big(3 V(S_G) + 2 V(i^*)\big) + 2 V(i^*)\right) + \frac{\varepsilon}{C} \end{align*} Thus: \begin{align*} V(i^*) \leq \frac{12e}{C(e-1)- 10e + 2} V(S_G) + \frac{(e-1)\varepsilon}{C(e-1)- 10e + 2} \end{align*} Finally, using again lemma~\ref{lemma:greedy-bound}, we get: \begin{multline}\label{eq:bound2} OPT(V, \mathcal{N}, B) \leq \frac{3e}{e-1}\left( 1 + \frac{8e}{C (e-1) -10e +2}\right) V(S_G)\\ +\frac{2e\varepsilon}{C(e-1)- 10e + 2} \end{multline} Looking at \eqref{eq:bound1} and \eqref{eq:bound2}, we see that the optimal value for $C$ is: \begin{displaymath} C^* = \argmin_C \max\left(1+C,\frac{3e}{e-1}\left( 1 + \frac{8e}{C (e-1) -10e +2} \right)\right) \end{displaymath} The minimum is attained when the two terms are equal, which yields a quadratic equation with two solutions. Only one of those is such that: \begin{displaymath} C(e-1) -10e +2 > 0 \end{displaymath} which is needed for the above derivation to be valid. This solution is: \begin{displaymath} C^* = \frac{12e-1 + \sqrt{160e^2-48e + 9}}{2(e-1)} \end{displaymath} For this solution we have: \begin{displaymath} \frac{2e}{C^*(e-1)- 10e + 2}\leq 1 \end{displaymath} so that the additive error term in \eqref{eq:bound2} is at most $\varepsilon$. Plugging the expression of $C^*$ into \eqref{eq:bound1} and \eqref{eq:bound2} gives the approximation ratio in the lemma's statement. \end{proof}
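As a numerical sanity check (added for convenience, not part of the original argument), one can verify that $C^*$ balances the two bounds \eqref{eq:bound1} and \eqref{eq:bound2}, and that their common value is the $\simeq 19.68$ ratio of theorem~\ref{thm:main}:
\begin{verbatim}
import math

e = math.e
C = (12 * e - 1 + math.sqrt(160 * e**2 - 48 * e + 9)) / (2 * (e - 1))
bound1 = 1 + C                                     # coefficient in (bound1)
bound2 = 3 * e / (e - 1) * (1 + 8 * e / (C * (e - 1) - 10 * e + 2))
print(bound1, bound2)   # both ~ 19.678
assert abs(bound1 - bound2) < 1e-9
\end{verbatim}
\subsection{Proof of lemma \ref{lemma:relaxation}}\label{sec:relaxation} Since $L_\mathcal{N}$ is a relaxation of the value function $V$, we have: \begin{displaymath} OPT(V,\mathcal{N},B) \leq OPT(L_\mathcal{N}, B) \end{displaymath} However, for the purpose of proving theorem~\ref{thm:main}, we need to bound $OPT(L_\mathcal{N}, B)$ from above (up to a multiplicative factor) by the value function. We use the \emph{pipage rounding} method of Ageev and Sviridenko \cite{pipage}: $L_\mathcal{N}$ is first bounded from above by $F_\mathcal{N}$, which is itself subsequently bounded by $V$.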
While the latter part is general and can be applied to the multi-linear extension of any submodular function, the former part is specific to our choice of the function $L_\mathcal{N}$ and is our main technical contribution (lemma~\ref{lemma:relaxation-ratio}). First we prove a variant of the $\varepsilon$-convexity of the multi-linear extension \eqref{eq:multilinear} introduced by Ageev and Sviridenko \cite{pipage}, which allows one to trade one fractional component for another until one of them becomes integral, without losing any value. This property is also referred to in the literature as \emph{cross-convexity} (see \emph{e.g.} \cite{dughmi}). Formally, this property states that if we define: \begin{displaymath} \tilde{F}_\lambda(\varepsilon) \defeq F_\mathcal{N}\big(\lambda + \varepsilon(e_i - e_j)\big) \end{displaymath} where $e_i$ and $e_j$ are two vectors of the standard basis of $\reals^{|\mathcal{N}|}$, then $\tilde{F}_\lambda$ is convex. Hence its maximum over the interval: \begin{displaymath} I_\lambda = \Big[\max(-\lambda_i,\lambda_j-1), \min(1-\lambda_i, \lambda_j)\Big] \end{displaymath} is attained at an endpoint of $I_\lambda$, at which either the $i$-th or the $j$-th component of $\lambda$ becomes integral. The lemma below proves that we can similarly trade one fractional component for another until one of them becomes integral \emph{while maintaining the feasibility of the point at which $F_\mathcal{N}$ is evaluated}. Here, by feasibility of a point $\lambda$, we mean that it satisfies the budget constraint $\sum_{1\leq i\leq |\mathcal{N}|} \lambda_i c_i \leq B$. \begin{lemma}[Rounding]\label{lemma:rounding} For any feasible $\lambda\in[0,1]^{|\mathcal{N}|}$, there exists a feasible $\bar{\lambda}\in[0,1]^{|\mathcal{N}|}$ such that at most one of its components is fractional, that is, lies in $(0,1)$, and: \begin{displaymath} F_{\mathcal{N}}(\lambda)\leq F_{\mathcal{N}}(\bar{\lambda}) \end{displaymath} \end{lemma} \begin{proof} We give a rounding procedure which, given a feasible $\lambda$ with at least two fractional components, returns a feasible $\lambda'$ with one less fractional component such that: \begin{displaymath} F_\mathcal{N}(\lambda) \leq F_\mathcal{N}(\lambda') \end{displaymath} Applying this procedure recursively yields the lemma's result. Let us consider such a feasible $\lambda$.
Let $i$ and $j$ be two fractional components of $\lambda$ and let us define the following function: \begin{displaymath} F_\lambda(\varepsilon) = F_\mathcal{N}(\lambda_\varepsilon) \quad\textrm{where} \quad \lambda_\varepsilon = \lambda + \varepsilon\left(e_i-\frac{c_i}{c_j}e_j\right) \end{displaymath} It is easy to see that if $\lambda$ is feasible, then: \begin{multline}\label{eq:convex-interval} \forall\varepsilon\in\Big[\max\Big(-\lambda_i,(\lambda_j-1)\frac{c_j}{c_i}\Big), \min\Big(1-\lambda_i, \lambda_j \frac{c_j}{c_i}\Big)\Big],\;\\ \lambda_\varepsilon\;\;\textrm{is feasible} \end{multline} Furthermore, the function $F_\lambda$ is convex. Indeed: \begin{align*} F_\lambda(\varepsilon) & = \mathbb{E}_{S'\sim P_{\mathcal{N}\setminus\{i,j\}}^\lambda}\Big[ (\lambda_i+\varepsilon)\Big(\lambda_j-\varepsilon\frac{c_i}{c_j}\Big)V(S'\cup\{i,j\})\\ & + (\lambda_i+\varepsilon)\Big(1-\lambda_j+\varepsilon\frac{c_i}{c_j}\Big)V(S'\cup\{i\})\\ & + (1-\lambda_i-\varepsilon)\Big(\lambda_j-\varepsilon\frac{c_i}{c_j}\Big)V(S'\cup\{j\})\\ & + (1-\lambda_i-\varepsilon)\Big(1-\lambda_j+\varepsilon\frac{c_i}{c_j}\Big)V(S')\Big] \end{align*} Thus, $F_\lambda$ is a degree 2 polynomial whose dominant coefficient is: \begin{multline*} \frac{c_i}{c_j}\mathbb{E}_{S'\sim P_{\mathcal{N}\setminus\{i,j\}}^\lambda}\Big[ V(S'\cup\{i\})+V(S'\cup\{j\})\\ -V(S'\cup\{i,j\})-V(S')\Big] \end{multline*} which is nonnegative by submodularity of $V$. Hence, the maximum of $F_\lambda$ over the interval given in \eqref{eq:convex-interval} is attained at one of its endpoints, at which either the $i$-th or the $j$-th component of $\lambda_\varepsilon$ becomes integral. \end{proof} \begin{lemma}\label{lemma:relaxation-ratio} The following inequality holds: \begin{displaymath} \forall\lambda\in[0,1]^{|\mathcal{N}|},\; \frac{1}{2} \,L_\mathcal{N}(\lambda)\leq F_\mathcal{N}(\lambda)\leq L_{\mathcal{N}}(\lambda) \end{displaymath} \end{lemma} \begin{proof} We will prove that $\frac{1}{2}$ is a lower bound on the ratio $\partial_i F_\mathcal{N}(\lambda)/\partial_i L_\mathcal{N}(\lambda)$, where $\partial_i$ denotes the derivative with respect to the $i$-th variable. This will be enough to conclude. Indeed, since $F_\mathcal{N}$ and $L_\mathcal{N}$ both vanish at $\lambda = 0$, one can write: \begin{displaymath} \frac{F_\mathcal{N}(\lambda)}{L_\mathcal{N}(\lambda)} \sim_{\lambda\rightarrow 0} \frac{\sum_{i\in \mathcal{N}}\lambda_i\partial_i F_\mathcal{N}(0)} {\sum_{i\in\mathcal{N}}\lambda_i\partial_i L_\mathcal{N}(0)} \geq \frac{1}{2} \end{displaymath} If the minimum of the ratio is attained at a point interior to the hypercube, then this point is a critical point of the ratio $F_\mathcal{N}(\lambda)/L_\mathcal{N}(\lambda)$, characterized by: \begin{equation}\label{eq:lhopital} \frac{F_\mathcal{N}(\lambda)}{L_\mathcal{N}(\lambda)} = \frac{\partial_i F_\mathcal{N}(\lambda)}{\partial_i L_\mathcal{N}(\lambda)} \geq \frac{1}{2} \end{equation} Finally, if the minimum is attained on a face of the hypercube (a face is defined as a subset of the hypercube where one of the variables is fixed to 0 or 1), without loss of generality, we can assume that the minimum is attained on the face where the $|\mathcal{N}|$-th variable has been fixed to 0 or 1. Then, either the minimum is attained at a point interior to the face or on the boundary of the face. In the first case, relation \eqref{eq:lhopital} still characterizes the minimum for $i< |\mathcal{N}|$.
In the second case, repeating the argument by induction, we see that all that is left to do is to show that the bound holds at the vertices of the cube (the faces of dimension 0). The vertices are exactly the binary points, at which both relaxations are equal to the value function $V$; hence the ratio is equal to 1 on the vertices. Let us start by computing the derivatives of $F_\mathcal{N}$ and $L_\mathcal{N}$ with respect to the $i$-th component. For $F_\mathcal{N}$, it suffices to look at the derivative of $P_\mathcal{N}^\lambda(S)$: \begin{displaymath} \partial_i P_\mathcal{N}^\lambda(S) = \left\{ \begin{aligned} & P_{\mathcal{N}\setminus\{i\}}^\lambda(S\setminus\{i\})\;\textrm{if}\; i\in S \\ & - P_{\mathcal{N}\setminus\{i\}}^\lambda(S)\;\textrm{if}\; i\in \mathcal{N}\setminus S \\ \end{aligned}\right. \end{displaymath} Hence: \begin{multline*} \partial_i F_\mathcal{N} = \sum_{\substack{S\subseteq\mathcal{N}\\ i\in S}} P_{\mathcal{N}\setminus\{i\}}^\lambda(S\setminus\{i\})V(S)\\ - \sum_{\substack{S\subseteq\mathcal{N}\\ i\in \mathcal{N}\setminus S}} P_{\mathcal{N}\setminus\{i\}}^\lambda(S)V(S) \end{multline*} Now, using that every $S$ such that $i\in S$ can be uniquely written as $S'\cup\{i\}$ with $i\notin S'$, we can write: \begin{multline*} \partial_i F_\mathcal{N} = \sum_{\substack{S\subseteq\mathcal{N}\\ i\in\mathcal{N}\setminus S}} P_{\mathcal{N}\setminus\{i\}}^\lambda(S)V(S\cup\{i\})\\ - \sum_{\substack{S\subseteq\mathcal{N}\\ i\in \mathcal{N}\setminus S}} P_{\mathcal{N}\setminus\{i\}}^\lambda(S)V(S) \end{multline*} Finally, using the expression for the marginal contribution of $i$ to $S$, namely $V(S\cup\{i\}) - V(S) = \log\big(1 + \T{x_i}A(S)^{-1}x_i\big)$: \begin{displaymath} \partial_i F_\mathcal{N}(\lambda) = \sum_{\substack{S\subseteq\mathcal{N}\\ i\in\mathcal{N}\setminus S}} P_{\mathcal{N}\setminus\{i\}}^\lambda(S) \log\Big(1 + \T{x_i}A(S)^{-1}x_i\Big) \end{displaymath} The computation of the derivative of $L_\mathcal{N}$ uses standard matrix calculus and gives: \begin{displaymath} \partial_i L_\mathcal{N}(\lambda) = \T{x_i}\tilde{A}(\lambda)^{-1}x_i \quad\text{where}\quad \tilde{A}(\lambda) \defeq I_d + \sum_{j\in\mathcal{N}} \lambda_j x_j\T{x_j} \end{displaymath} Using the following inequalities: \begin{gather*} \forall S\subseteq\mathcal{N}\setminus\{i\},\quad P_{\mathcal{N}\setminus\{i\}}^\lambda(S)\geq P_\mathcal{N}^\lambda(S)\;\;\text{and}\;\; P_{\mathcal{N}\setminus\{i\}}^\lambda(S)\geq P_\mathcal{N}^\lambda(S\cup\{i\})\\ \forall S\subseteq\mathcal{N},\quad A(S)^{-1} \geq A(S\cup\{i\})^{-1} \end{gather*} we get: \begin{align*} \partial_i F_\mathcal{N}(\lambda) & = \sum_{\substack{S\subseteq\mathcal{N}\\ i\in\mathcal{N}\setminus S}} P_{\mathcal{N}\setminus\{i\}}^\lambda(S) \log\Big(1 + \T{x_i}A(S)^{-1}x_i\Big)\\ & \geq \frac{1}{2} \sum_{\substack{S\subseteq\mathcal{N}\\ i\in\mathcal{N}\setminus S}} P_\mathcal{N}^\lambda(S) \log\Big(1 + \T{x_i}A(S)^{-1}x_i\Big)\\ &\hspace{-3.5em}+\frac{1}{2} \sum_{\substack{S\subseteq\mathcal{N}\\ i\in\mathcal{N}\setminus S}} P_\mathcal{N}^\lambda(S\cup\{i\}) \log\Big(1 + \T{x_i}A(S\cup\{i\})^{-1}x_i\Big)\\ &\geq \frac{1}{2} \sum_{S\subseteq\mathcal{N}} P_\mathcal{N}^\lambda(S) \log\Big(1 + \T{x_i}A(S)^{-1}x_i\Big) \end{align*} Using that $A(S)\geq I_d$, we get: \begin{displaymath} \T{x_i}A(S)^{-1}x_i \leq \norm{x_i}_2^2 = 1 \end{displaymath} Moreover: \begin{displaymath} \forall x\in[0,1],\; \log(1+x)\geq x \end{displaymath} Hence: \begin{displaymath} \partial_i F_\mathcal{N}(\lambda) \geq \frac{1}{2} \T{x_i}\bigg(\sum_{S\subseteq\mathcal{N}}P_\mathcal{N}^\lambda(S)A(S)^{-1}\bigg)x_i \end{displaymath}
Finally, using that matrix inversion is convex over symmetric positive definite matrices, so that $\sum_{S\subseteq\mathcal{N}}P_\mathcal{N}^\lambda(S)A(S)^{-1} \geq \big(\sum_{S\subseteq\mathcal{N}}P_\mathcal{N}^\lambda(S)A(S)\big)^{-1}$ by Jensen's inequality, and noting that $\sum_{S\subseteq\mathcal{N}}P_\mathcal{N}^\lambda(S)A(S) = \tilde{A}(\lambda)$: \begin{align*} \partial_i F_\mathcal{N}(\lambda) &\geq \frac{1}{2} \T{x_i}\bigg(\sum_{S\subseteq\mathcal{N}}P_\mathcal{N}^\lambda(S)A(S)\bigg)^{-1}x_i\\ & = \frac{1}{2} \partial_i L_\mathcal{N}(\lambda) \end{align*} \end{proof} \begin{proof}[Proof of lemma~\ref{lemma:relaxation}] Let us consider a feasible point $\lambda^*\in[0,1]^{|\mathcal{N}|}$ such that $L_\mathcal{N}(\lambda^*) = OPT(L_\mathcal{N}, B)$. By applying lemma~\ref{lemma:relaxation-ratio} and then lemma~\ref{lemma:rounding}, we get a feasible point $\bar{\lambda}$ with at most one fractional component such that: \begin{equation}\label{eq:e1} L_\mathcal{N}(\lambda^*) \leq 2 F_\mathcal{N}(\bar{\lambda}) \end{equation} Let $\lambda_i$ denote the fractional component of $\bar{\lambda}$ and let $S$ denote the set whose indicator vector is $\bar{\lambda} - \lambda_i e_i$. Using the fact that $F_\mathcal{N}$ is linear with respect to the $i$-th component and coincides with $V$ at integral points, we get: \begin{displaymath} F_\mathcal{N}(\bar{\lambda}) = (1-\lambda_i) V(S) +\lambda_i V(S\cup\{i\}) \leq V(S) + \lambda_i V(S\cup\{i\}) \end{displaymath} Using the submodularity of $V$, which gives $V(S\cup\{i\})\leq V(S) + V(i)$: \begin{displaymath} F_\mathcal{N}(\bar{\lambda}) \leq 2 V(S) + V(i) \end{displaymath} Note that since $\bar{\lambda}$ is feasible, $S$ is also feasible and $V(S)\leq OPT(V,\mathcal{N}, B)$. Hence: \begin{equation}\label{eq:e2} F_\mathcal{N}(\bar{\lambda}) \leq 2 OPT(V,\mathcal{N}, B) + \max_{i\in\mathcal{N}} V(i) \end{equation} Putting \eqref{eq:e1} and \eqref{eq:e2} together gives the result. \end{proof}
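To illustrate the rounding procedure underlying lemma~\ref{lemma:rounding}, the following sketch implements the budget-preserving trade between two fractional components; it evaluates $F_\mathcal{N}$ by explicit enumeration and is therefore only meant for small instances, and all names are ours.
\begin{verbatim}
import itertools
import numpy as np

def V(X, S):
    """V(S) = log det(I_d + sum_{i in S} x_i x_i^T)."""
    d = X.shape[1]
    M = X[list(S)]
    return np.linalg.slogdet(np.eye(d) + M.T @ M)[1]

def F(X, lam):
    """Multi-linear extension F_N(lam), by explicit enumeration (small n)."""
    n = X.shape[0]
    total = 0.0
    for k in range(n + 1):
        for S in itertools.combinations(range(n), k):
            p = np.prod([lam[i] if i in S else 1.0 - lam[i]
                         for i in range(n)])
            total += p * V(X, S)
    return total

def pipage_round(X, lam, c, eps=1e-9):
    """Trade two fractional components along e_i - (c_i/c_j) e_j: this
    keeps the budget <c, lam> unchanged, and by convexity of F along this
    direction, the better endpoint of the feasible interval (cf. eq.
    (convex-interval)) does not decrease F."""
    lam = np.asarray(lam, dtype=float).copy()
    while True:
        frac = [i for i in range(len(lam)) if eps < lam[i] < 1.0 - eps]
        if len(frac) < 2:
            return lam
        i, j = frac[0], frac[1]
        lo = max(-lam[i], (lam[j] - 1.0) * c[j] / c[i])
        hi = min(1.0 - lam[i], lam[j] * c[j] / c[i])
        best, best_val = None, -np.inf
        for t in (lo, hi):
            trial = lam.copy()
            trial[i] += t
            trial[j] -= t * c[i] / c[j]
            val = F(X, trial)
            if val > best_val:
                best, best_val = trial, val
        lam = best
\end{verbatim}
On random feasible points one can check that the budget $\sum_i \lambda_i c_i$ is preserved and that $F_\mathcal{N}$ never decreases along the rounding, as lemma~\ref{lemma:rounding} predicts.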