In this section we present a mechanism for the problem described in section~\ref{sec:auction}. Previous works on maximizing submodular functions \cite{nemhauser, sviridenko-submodular} and on designing auction mechanisms for submodular utility functions \cite{singer-mechanisms, chen, singer-influence} rely on a greedy heuristic. In this heuristic, points which maximize the \emph{marginal-contribution-per-cost} ratio are greedily added to the solution set. The \emph{marginal-contribution-per-cost} ratio of a point $i$ with cost $c_i$ relative to a set $S$ is defined by: \begin{displaymath} \frac{V(S\cup\{i\}) - V(S)}{c_i} \end{displaymath} This generalizes the \emph{value-per-cost} ratio used in the greedy heuristic for knapsack problems. Note, however, that for general submodular functions, the value of a point depends on the set of points which have already been selected. Unfortunately, even in the non-strategic case, the greedy heuristic alone has an unbounded approximation ratio. Khuller et al. \cite{khuller} noted that for the maximum coverage problem, taking the maximum between the greedy solution and the point of maximum value gives a $\frac{2e}{e-1}$ approximation ratio. In the general case, Lemma 3.1 from \cite{singer-influence}, which follows from \cite{chen}, shows that this approach has an approximation ratio of $\frac{5e}{e-1}$ (see lemma~\ref{lemma:greedy-bound} below). However, Singer \cite{singer-influence} notes that this approach breaks incentive compatibility and therefore cannot be directly applied to the strategic case. Two approaches have been studied to deal with the strategic case; both rely on comparing the point of maximum value to a quantity which is provably not too far from the greedy solution while preserving incentive compatibility. \begin{itemize} \item In \cite{chen}, the authors suggest using $OPT(V,\mathcal{N}\setminus\{i^*\}, B)$ where $i^*$ is the point of maximum value. While this yields an approximation ratio of 8.34, the optimal value cannot, in the general case, be computed in polynomial time. \item For the set coverage problem, Singer \cite{singer-influence} uses a relaxation of the value function which can be proven to have a constant approximation ratio to the value function. \end{itemize} Here, we will use a specific relaxation of the value function \eqref{vs}. Let us define the function $L_\mathcal{N}$: \begin{displaymath} \forall\lambda\in[0,1]^{|\mathcal{N}|},\quad L_{\mathcal{N}}(\lambda) \defeq \log\det\left(I_d + \mu\sum_{i\in\mathcal{N}} \lambda_i x_i \T{x_i}\right) \end{displaymath} Our mechanism for ridge regression is presented in Algorithm~\ref{mechanism}.
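Both building blocks of the mechanism are straightforward to evaluate numerically. The following minimal sketch (in Python with NumPy; the value oracle \texttt{V}, the data matrix \texttt{X} and the cost vector \texttt{c} are illustrative assumptions, not part of our formal setup) evaluates $L_\mathcal{N}$ and the marginal-contribution-per-cost ratio:
\begin{verbatim}
import numpy as np

def L(lam, X, mu):
    # L_N(lambda) = log det(I_d + mu * sum_i lam_i x_i x_i^T),
    # where X is an (n, d) array whose rows are the points x_i.
    d = X.shape[1]
    M = np.eye(d) + mu * (X.T * lam) @ X
    return np.linalg.slogdet(M)[1]  # more stable than log(det(M))

def marginal_per_cost(V, S, i, c):
    # Marginal-contribution-per-cost ratio of point i relative to
    # the set S, where V is a set-function oracle.
    return (V(S | {i}) - V(S)) / c[i]
\end{verbatim}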
\begin{algorithm} \caption{Mechanism for ridge regression}\label{mechanism} \begin{algorithmic}[1] \State $i^* \gets \argmax_{j\in\mathcal{N}}V(j)$ \State $\lambda^* \gets \argmax_{\lambda\in[0,1]^{|\mathcal{N}|}} \{L_{\mathcal{N}\setminus\{i^*\}}(\lambda) \mid c(\lambda)\leq B\}$ \Statex \If{$L(\lambda^*) < CV(i^*)$} \State \textbf{return} $\{i^*\}$ \Else \State $i \gets \argmax_{j\in\mathcal{N}}\frac{V(j)}{c_j}$ \State $S \gets \emptyset$ \While{$c_i\leq \frac{B}{2}\frac{V(S\cup\{i\})-V(S)}{V(S\cup\{i\})}$} \State $S \gets S\cup\{i\}$ \State $i \gets \argmax_{j\in\mathcal{N}\setminus S} \frac{V(S\cup\{j\})-V(S)}{c_j}$ \EndWhile \State \textbf{return} $S$ \EndIf \end{algorithmic} \end{algorithm} \emph{Remarks} \begin{enumerate} \item The function $L_\mathcal{N}$ is concave (see lemma~\ref{lemma:concave}), hence the maximization step on line 2 of the mechanism can be computed in polynomial time, which proves that the mechanism overall has polynomial complexity. \item The stopping rule in the while loop is more sophisticated than simply checking the budget constraint ($c(S) \leq B$). This is to ensure budget feasibility (see lemma~\ref{lemma:budget-feasibility}). \end{enumerate} We can now state the main result of this section: \begin{theorem} The mechanism described in Algorithm~\ref{mechanism} is truthful, individually rational, and budget feasible. Furthermore, choosing: \begin{multline*} C = C^* = \frac{5e-1 + C_\mu(2e+1)}{2C_\mu(e-1)}\\ + \frac{\sqrt{C_\mu^2(1+2e)^2 + 2C_\mu(14e^2+5e+1) + (1-5e)^2}}{2C_\mu(e-1)} \end{multline*} we get an approximation ratio of: \begin{multline*} 1 + C^* = \frac{5e-1 + C_\mu(4e-1)}{2C_\mu(e-1)}\\ + \frac{\sqrt{C_\mu^2(1+2e)^2 + 2C_\mu(14e^2+5e+1) + (1-5e)^2}}{2C_\mu(e-1)} \end{multline*} where: \begin{displaymath} C_\mu = \frac{\log(1+\mu)}{2\mu} \end{displaymath} \end{theorem} The proof consists of the claims of the theorem, broken down into a series of lemmas. Note that this is a single-parameter mechanism. Hence, by Myerson's characterization of truthful mechanisms \cite{myerson}, proving truthfulness amounts to proving the monotonicity of the mechanism: if a user is selected by the mechanism when reporting a cost $c_i$, then he is still selected when reporting another cost $c_i'\leq c_i$, provided that the remaining users do not change their costs. We prove the monotonicity of the mechanism in lemma~\ref{lemma:monotone} below. The proof is similar to that of Lemma 3.2 in \cite{singer-influence}. \begin{lemma}\label{lemma:monotone} The mechanism is monotone. \end{lemma} \begin{proof} Assume by contradiction that there exists a user $i$ that has been selected by the mechanism and that would not be selected had he reported a cost $c_i'\leq c_i$ (all the other costs staying the same). If $i\neq i^*$ and $i$ has been selected, then we are in the case where $L(\lambda^*) \geq C V(i^*)$ and $i$ was included in the result set by the greedy part of the mechanism. When reporting a cost $c_i'\leq c_i$, by submodularity of $V$, we see that $i$ satisfies the greedy selection rule: \begin{displaymath} i = \argmax_{j\in\mathcal{N}\setminus S} \frac{V(S\cup\{j\}) - V(S)}{c_j} \end{displaymath} at the same or an earlier iteration of the greedy heuristic. Let us denote by $S_i$ (resp. $S_i'$) the set to which $i$ is added when reporting cost $c_i$ (resp. $c_i'$). We have $S_i'\subseteq S_i$.
Moreover: \begin{align*} c_i' & \leq c_i \leq \frac{B}{2}\frac{V(S_i\cup\{i\})-V(S_i)}{V(S_i\cup\{i\})}\\ & \leq \frac{B}{2}\frac{V(S_i'\cup\{i\})-V(S_i')}{V(S_i'\cup\{i\})} \end{align*} Hence $i$ will still be included in the result set. If $i = i^*$, then $i$ is included iff $L(\lambda^*) < C V(i^*)$. Reporting $c_i'$ instead of $c_i$ changes neither $V(i^*)$ nor $L(\lambda^*)$ (which is computed over $\mathcal{N}\setminus\{i^*\}$). Thus $i$ is still included when reporting a different cost. \end{proof} \begin{lemma}\label{lemma:budget-feasibility} The mechanism is budget feasible. \end{lemma} The proof is the same as in \cite{chen} and is given here for the sake of completeness. \begin{proof} \end{proof} The following lemma proves that the relaxation $L_\mathcal{N}$ we are using has a bounded approximation ratio to the value function $V$. For readability, the proof is postponed to section~\ref{sec:relaxation}. \begin{lemma}\label{lemma:relaxation} We have: \begin{displaymath} OPT(L_\mathcal{N}, B) \leq \frac{1}{C_\mu}\big(2 OPT(V,\mathcal{N},B) + \max_{i\in\mathcal{N}}V(i)\big) \end{displaymath} \end{lemma} Let us recall the following lemma from \cite{chen}, which we use in the proof of lemma~\ref{lemma:approx}. It shows, as mentioned above, that taking the maximum between the greedy solution and the point of maximum value gives a $\frac{5e}{e-1}$ approximation ratio. \begin{lemma}\label{lemma:greedy-bound} Let $S_M$ denote the set returned by the greedy heuristic. The following inequality holds: \begin{displaymath} OPT(V,\mathcal{N},B) \leq \frac{e}{e-1}\big(3 V(S_M) + 2 V(i^*)\big) \end{displaymath} In particular, $OPT(V,\mathcal{N},B) \leq \frac{5e}{e-1}\max\big( V(S_M), V(i^*)\big)$. \end{lemma} \begin{lemma}\label{lemma:approx} Let us denote by $S_M$ the set returned by the mechanism. Let us also write: \begin{displaymath} C_{\textrm{max}} = \max\left(1+C,\frac{e}{e-1}\left( 3 + \frac{12e}{C\cdot C_\mu(e-1) -5e +1}\right)\right) \end{displaymath} Then: \begin{displaymath} OPT(V, \mathcal{N}, B) \leq C_{\textrm{max}}\cdot V(S_M) \end{displaymath} \end{lemma} \begin{proof} If the condition on line 3 of the algorithm holds, then, since $L_{\mathcal{N}\setminus\{i^*\}}$ is a relaxation of $V$ over $\mathcal{N}\setminus\{i^*\}$: \begin{displaymath} V(i^*) \geq \frac{1}{C}L(\lambda^*) \geq \frac{1}{C}OPT(V,\mathcal{N}\setminus\{i^*\}, B) \end{displaymath} But: \begin{displaymath} OPT(V,\mathcal{N},B) \leq OPT(V,\mathcal{N}\setminus\{i^*\}, B) + V(i^*) \end{displaymath} Hence: \begin{displaymath} V(i^*) \geq \frac{1}{C+1} OPT(V,\mathcal{N}, B) \end{displaymath} In this case the mechanism returns $\{i^*\}$, so $OPT(V,\mathcal{N},B) \leq (1+C)\,V(S_M)$. If the condition of the algorithm does not hold, then, using $L(\lambda^*) \leq OPT(L_\mathcal{N}, B)$, lemma~\ref{lemma:relaxation} (note that $\max_{i\in\mathcal{N}}V(i) = V(i^*)$), and lemma~\ref{lemma:greedy-bound}: \begin{align*} V(i^*) & \leq \frac{1}{C}L(\lambda^*) \leq \frac{1}{C\cdot C_\mu} \big(2 OPT(V,\mathcal{N}, B) + V(i^*)\big)\\ & \leq \frac{1}{C\cdot C_\mu}\left(\frac{2e}{e-1}\big(3 V(S_M) + 2 V(i^*)\big) + V(i^*)\right) \end{align*} Thus: \begin{displaymath} V(i^*) \leq \frac{6e}{C\cdot C_\mu(e-1)- 5e + 1} V(S_M) \end{displaymath} Finally, using lemma~\ref{lemma:greedy-bound} again, we get: \begin{displaymath} OPT(V, \mathcal{N}, B) \leq \frac{e}{e-1}\left( 3 + \frac{12e}{C\cdot C_\mu(e-1) -5e +1}\right) V(S_M) \end{displaymath} \end{proof} The optimal value for $C$ is: \begin{displaymath} C^* = \argmin_C C_{\textrm{max}} \end{displaymath} The minimum is attained when the two terms of the maximum are equal, which yields a quadratic equation in $C$ with two solutions. Only one of them satisfies: \begin{displaymath} C\cdot C_\mu(e-1) -5e +1 \geq 0 \end{displaymath} which is needed in the proof of the previous lemma. Computing this solution gives the result of the theorem. \subsection{Relaxations of the value function}\label{sec:relaxation} To prove lemma~\ref{lemma:relaxation}, we use a general method called \emph{pipage rounding}, introduced in \cite{pipage}.
This method relies on \emph{piping} two relaxations of the value function, one being the \emph{multilinear extension} introduced below, the other being the relaxation $L_\mathcal{N}$ already introduced in our mechanism. At each stage of the pipe, we show that a bounded approximation ratio to the original value function is preserved. We say that $R_\mathcal{N}:[0,1]^{|\mathcal{N}|}\rightarrow\reals$ is a relaxation of the value function $V$ over $\mathcal{N}$ if it coincides with $V$ at binary points. Formally, for any $S\subseteq\mathcal{N}$, let $\mathbf{1}_S$ denote the indicator vector of $S$. Then $R_\mathcal{N}$ is a relaxation of $V$ over $\mathcal{N}$ iff: \begin{displaymath} \forall S\subseteq\mathcal{N},\; R_\mathcal{N}(\mathbf{1}_S) = V(S) \end{displaymath} We extend the optimization problem defined above to a relaxation by extending the cost function: \begin{displaymath} \forall \lambda\in[0,1]^{|\mathcal{N}|},\; c(\lambda) = \sum_{i\in\mathcal{N}}\lambda_ic_i \end{displaymath} The optimization problem becomes: \begin{displaymath} OPT(R_\mathcal{N}, B) = \max_{\lambda\in[0,1]^{|\mathcal{N}|}}\left\{R_\mathcal{N}(\lambda)\,|\, c(\lambda)\leq B\right\} \end{displaymath} The relaxations we consider rely on defining a probability distribution over subsets of $\mathcal{N}$. For $\lambda\in[0,1]^{|\mathcal{N}|}$, let us define: \begin{displaymath} P_\mathcal{N}^\lambda(S) = \prod_{i\in S}\lambda_i \prod_{i\in\mathcal{N}\setminus S}(1-\lambda_i) \end{displaymath} $P_\mathcal{N}^\lambda(S)$ is the probability of picking the set $S$ when selecting a subset of $\mathcal{N}$ at random by deciding independently for each point $i$ to include it with probability $\lambda_i$ (and to exclude it with probability $1-\lambda_i$). We will consider two relaxations of the value function $V$ over $\mathcal{N}$ (a numerical sketch of both is given below): \begin{itemize} \item the \emph{multilinear extension} of $V$: \begin{align*} F_\mathcal{N}(\lambda) & = \mathbb{E}_{S\sim P_\mathcal{N}^\lambda}\big[V(S)\big]\\ & = \sum_{S\subseteq\mathcal{N}} P_\mathcal{N}^\lambda(S) V(S)\\ & = \sum_{S\subseteq\mathcal{N}} P_\mathcal{N}^\lambda(S) \log\det A(S) \end{align*} \item the \emph{concave relaxation} of $V$: \begin{align*} L_{\mathcal{N}}(\lambda) & = \log\det \mathbb{E}_{S\sim P_\mathcal{N}^\lambda}\big[A(S)\big]\\ & = \log\det\left(\sum_{S\subseteq \mathcal{N}} P_\mathcal{N}^\lambda(S)A(S)\right)\\ & = \log\det\left(I_d + \mu\sum_{i\in\mathcal{N}} \lambda_ix_i\T{x_i}\right)\\ & \defeq \log\det \tilde{A}(\lambda) \end{align*} \end{itemize} \begin{lemma}\label{lemma:concave} The \emph{concave relaxation} $L_\mathcal{N}$ is concave\footnote{Hence this relaxation is well-named!}. \end{lemma} \begin{proof} This follows from the concavity of the $\log\det$ function over symmetric positive definite matrices. More precisely, if $A$ and $B$ are two symmetric positive definite matrices, then: \begin{multline*} \forall\alpha\in [0, 1],\; \log\det\big(\alpha A + (1-\alpha) B\big)\\ \geq \alpha\log\det A + (1-\alpha)\log\det B \end{multline*} \end{proof} It has already been observed that the multilinear extension satisfies a \emph{cross-convexity} property: it is convex along any direction $e_i-e_j$, where $e_i$ and $e_j$ are two elements of the canonical basis. This property makes it possible to trade between two fractional components of a point without decreasing the value of the relaxation. The following lemma follows from the same idea, but also ensures that the points remain feasible during the trade.
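Before turning to the rounding lemma, let us make the two relaxations concrete. The multilinear extension can be estimated by Monte Carlo sampling from $P_\mathcal{N}^\lambda$, while the concave relaxation admits a closed form. The following sketch (Python with NumPy; the set-function oracle \texttt{V} and the data matrix \texttt{X} are illustrative assumptions) mirrors the two definitions:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def F_monte_carlo(lam, V, n_samples=2000):
    # Estimate F_N(lambda) = E_{S ~ P_N^lambda}[V(S)]: include each
    # point i independently with probability lam[i], average V(S).
    n = len(lam)
    draws = rng.random((n_samples, n)) < lam
    return np.mean([V({i for i in range(n) if row[i]})
                    for row in draws])

def L_closed_form(lam, X, mu):
    # Exact concave relaxation: log det E[A(S)] = log det A~(lambda).
    d = X.shape[1]
    A_tilde = np.eye(d) + mu * (X.T * lam) @ X
    return np.linalg.slogdet(A_tilde)[1]
\end{verbatim}
Note that, in expectation, the estimate returned by \texttt{F\_monte\_carlo} never exceeds \texttt{L\_closed\_form}: this is Jensen's inequality applied to the concave $\log\det$, and it is the easy half of lemma~\ref{lemma:relaxation-ratio} below.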
\begin{lemma}[Rounding]\label{lemma:rounding} For any feasible $\lambda\in[0,1]^{|\mathcal{N}|}$, there exists a feasible $\bar{\lambda}\in[0,1]^{|\mathcal{N}|}$ such that at most one of its components is fractional, that is, lies in $(0,1)$, and: \begin{displaymath} F_{\mathcal{N}}(\lambda)\leq F_{\mathcal{N}}(\bar{\lambda}) \end{displaymath} \end{lemma} \begin{proof} We give a rounding procedure which, given a feasible $\lambda$ with at least two fractional components, returns a feasible $\lambda'$ with one fewer fractional component such that: \begin{displaymath} F_\mathcal{N}(\lambda) \leq F_\mathcal{N}(\lambda') \end{displaymath} Applying this procedure recursively yields the lemma's result. Let us consider such a feasible $\lambda$. Let $i$ and $j$ be two fractional components of $\lambda$ and let us define the following function: \begin{displaymath} F_\lambda(\varepsilon) = F_\mathcal{N}(\lambda_\varepsilon) \quad\textrm{where} \quad \lambda_\varepsilon = \lambda + \varepsilon\left(e_i-\frac{c_i}{c_j}e_j\right) \end{displaymath} Since $c(\lambda_\varepsilon) = c(\lambda) + \varepsilon\big(c_i - \frac{c_i}{c_j}c_j\big) = c(\lambda)$, the cost is unchanged along this direction, and it is easy to see that if $\lambda$ is feasible, then: \begin{multline}\label{eq:convex-interval} \forall\varepsilon\in\Big[\max\Big(-\lambda_i,(\lambda_j-1)\frac{c_j}{c_i}\Big), \min\Big(1-\lambda_i, \lambda_j \frac{c_j}{c_i}\Big)\Big],\;\\ \lambda_\varepsilon\;\;\textrm{is feasible} \end{multline} Furthermore, the function $F_\lambda$ is convex. Indeed: \begin{align*} F_\lambda(\varepsilon) & = \mathbb{E}_{S'\sim P_{\mathcal{N}\setminus\{i,j\}}^\lambda}\Big[ (\lambda_i+\varepsilon)\Big(\lambda_j-\varepsilon\frac{c_i}{c_j}\Big)V(S'\cup\{i,j\})\\ & + (\lambda_i+\varepsilon)\Big(1-\lambda_j+\varepsilon\frac{c_i}{c_j}\Big)V(S'\cup\{i\})\\ & + (1-\lambda_i-\varepsilon)\Big(\lambda_j-\varepsilon\frac{c_i}{c_j}\Big)V(S'\cup\{j\})\\ & + (1-\lambda_i-\varepsilon)\Big(1-\lambda_j+\varepsilon\frac{c_i}{c_j}\Big)V(S')\Big] \end{align*} Thus, $F_\lambda$ is a polynomial of degree at most 2 in $\varepsilon$ whose leading coefficient: \begin{multline*} \frac{c_i}{c_j}\mathbb{E}_{S'\sim P_{\mathcal{N}\setminus\{i,j\}}^\lambda}\Big[ V(S'\cup\{i\})+V(S'\cup\{j\})\\ -V(S'\cup\{i,j\})-V(S')\Big] \end{multline*} is non-negative by submodularity of $V$. Hence, the maximum of $F_\lambda$ over the interval given in \eqref{eq:convex-interval} is attained at one of its endpoints, at which either the $i$-th or the $j$-th component of $\lambda_\varepsilon$ becomes integral. \end{proof} \begin{lemma}\label{lemma:relaxation-ratio} The following inequality holds: \begin{displaymath} \forall\lambda\in[0,1]^{|\mathcal{N}|},\; \frac{\log\big(1+\mu\big)}{2\mu} \,L_\mathcal{N}(\lambda)\leq F_\mathcal{N}(\lambda)\leq L_{\mathcal{N}}(\lambda) \end{displaymath} \end{lemma} \begin{proof} The upper bound follows from Jensen's inequality applied to the concave function $\log\det$. For the lower bound, we will prove that: \begin{displaymath} \frac{\log\big(1+\mu\big)}{2\mu} \end{displaymath} is a lower bound of the ratio $\partial_i F_\mathcal{N}(\lambda)/\partial_i L_\mathcal{N}(\lambda)$.
This is enough to conclude, by observing that: \begin{displaymath} \frac{F_\mathcal{N}(\lambda)}{L_\mathcal{N}(\lambda)} \sim_{\lambda\rightarrow 0} \frac{\sum_{i\in \mathcal{N}}\lambda_i\partial_i F_\mathcal{N}(0)} {\sum_{i\in\mathcal{N}}\lambda_i\partial_i L_\mathcal{N}(0)} \end{displaymath} and that an interior critical point of the ratio $F_\mathcal{N}(\lambda)/L_\mathcal{N}(\lambda)$ satisfies: \begin{displaymath} \frac{F_\mathcal{N}(\lambda)}{L_\mathcal{N}(\lambda)} = \frac{\partial_i F_\mathcal{N}(\lambda)}{\partial_i L_\mathcal{N}(\lambda)} \end{displaymath} Let us start by computing the derivatives of $F_\mathcal{N}$ and $L_\mathcal{N}$ with respect to the $i$-th component. For $F_\mathcal{N}$, it suffices to look at the derivative of $P_\mathcal{N}^\lambda(S)$: \begin{displaymath} \partial_i P_\mathcal{N}^\lambda(S) = \left\{ \begin{aligned} & P_{\mathcal{N}\setminus\{i\}}^\lambda(S\setminus\{i\})\;\textrm{if}\; i\in S \\ & - P_{\mathcal{N}\setminus\{i\}}^\lambda(S)\;\textrm{if}\; i\in \mathcal{N}\setminus S \\ \end{aligned}\right. \end{displaymath} Hence: \begin{multline*} \partial_i F_\mathcal{N}(\lambda) = \sum_{\substack{S\subseteq\mathcal{N}\\ i\in S}} P_{\mathcal{N}\setminus\{i\}}^\lambda(S\setminus\{i\})V(S)\\ - \sum_{\substack{S\subseteq\mathcal{N}\\ i\in \mathcal{N}\setminus S}} P_{\mathcal{N}\setminus\{i\}}^\lambda(S)V(S) \end{multline*} Now, using that every $S$ such that $i\in S$ can be uniquely written as $S'\cup\{i\}$ with $i\notin S'$, we can write: \begin{multline*} \partial_i F_\mathcal{N}(\lambda) = \sum_{\substack{S\subseteq\mathcal{N}\\ i\in\mathcal{N}\setminus S}} P_{\mathcal{N}\setminus\{i\}}^\lambda(S)V(S\cup\{i\})\\ - \sum_{\substack{S\subseteq\mathcal{N}\\ i\in \mathcal{N}\setminus S}} P_{\mathcal{N}\setminus\{i\}}^\lambda(S)V(S) \end{multline*} Finally, using the expression for the marginal contribution of $i$ to $S$: \begin{displaymath} \partial_i F_\mathcal{N}(\lambda) = \sum_{\substack{S\subseteq\mathcal{N}\\ i\in\mathcal{N}\setminus S}} P_{\mathcal{N}\setminus\{i\}}^\lambda(S) \log\Big(1 + \mu \T{x_i}A(S)^{-1}x_i\Big) \end{displaymath} The computation of the derivative of $L_\mathcal{N}$ uses standard matrix calculus and gives: \begin{displaymath} \partial_i L_\mathcal{N}(\lambda) = \mu \T{x_i}\tilde{A}(\lambda)^{-1}x_i \end{displaymath} Using the following inequalities: \begin{gather*} \forall S\subseteq\mathcal{N}\setminus\{i\},\quad P_{\mathcal{N}\setminus\{i\}}^\lambda(S)\geq P_\mathcal{N}^\lambda(S\cup\{i\})\\ \forall S\subseteq\mathcal{N}\setminus\{i\},\quad P_{\mathcal{N}\setminus\{i\}}^\lambda(S) \geq P_\mathcal{N}^\lambda(S)\\ \forall S\subseteq\mathcal{N},\quad A(S)^{-1} \geq A(S\cup\{i\})^{-1} \end{gather*} we get, by writing $\partial_i F_\mathcal{N}(\lambda)$ as the sum of two halves and bounding each half separately: \begin{align*} \partial_i F_\mathcal{N}(\lambda) & \geq \frac{1}{2} \sum_{\substack{S\subseteq\mathcal{N}\\ i\in\mathcal{N}\setminus S}} P_\mathcal{N}^\lambda(S) \log\Big(1 + \mu \T{x_i}A(S)^{-1}x_i\Big)\\ &\hspace{-3.5em}+\frac{1}{2} \sum_{\substack{S\subseteq\mathcal{N}\\ i\in\mathcal{N}\setminus S}} P_\mathcal{N}^\lambda(S\cup\{i\}) \log\Big(1 + \mu \T{x_i}A(S\cup\{i\})^{-1}x_i\Big)\\ &= \frac{1}{2} \sum_{S\subseteq\mathcal{N}} P_\mathcal{N}^\lambda(S) \log\Big(1 + \mu \T{x_i}A(S)^{-1}x_i\Big) \end{align*} Using that $A(S)\geq I_d$ and that $\|x_i\|\leq 1$, we get that: \begin{displaymath} \mu \T{x_i}A(S)^{-1}x_i \leq \mu \end{displaymath} Moreover: \begin{displaymath} \forall x\in[0,\mu],\; \log(1+x)\geq
\frac{\log\big(1+\mu\big)}{\mu} x \end{displaymath} Hence: \begin{displaymath} \partial_i F_\mathcal{N}(\lambda) \geq \frac{\log\big(1+\mu\big)}{2\mu}\, \mu\T{x_i}\bigg(\sum_{S\subseteq\mathcal{N}}P_\mathcal{N}^\lambda(S)A(S)^{-1}\bigg)x_i \end{displaymath} Finally, using that the matrix inverse is matrix convex over symmetric positive definite matrices: \begin{align*} \partial_i F_\mathcal{N}(\lambda) &\geq \frac{\log\big(1+\mu\big)}{2\mu}\, \mu\T{x_i}\bigg(\sum_{S\subseteq\mathcal{N}}P_\mathcal{N}^\lambda(S)A(S)\bigg)^{-1}x_i\\ & = \frac{\log\big(1+\mu\big)}{2\mu} \partial_i L_\mathcal{N}(\lambda) \end{align*} \end{proof} We can now prove lemma~\ref{lemma:relaxation} from the previous section. \begin{proof} Let us consider a feasible point $\lambda^*\in[0,1]^{|\mathcal{N}|}$ such that $L_\mathcal{N}(\lambda^*) = OPT(L_\mathcal{N}, B)$. By applying lemma~\ref{lemma:relaxation-ratio} and lemma~\ref{lemma:rounding}, we get a feasible point $\bar{\lambda}$ with at most one fractional component such that: \begin{equation}\label{eq:e1} L_\mathcal{N}(\lambda^*) \leq \frac{1}{C_\mu} F_\mathcal{N}(\bar{\lambda}) \end{equation} Let $\lambda_i$ denote the fractional component of $\bar{\lambda}$ and let $S$ denote the set whose indicator vector is $\bar{\lambda} - \lambda_i e_i$. Using the fact that $F_\mathcal{N}$ is linear with respect to the $i$-th component and is a relaxation of the value function, we get: \begin{displaymath} F_\mathcal{N}(\bar{\lambda}) = (1-\lambda_i)V(S) +\lambda_i V(S\cup\{i\}) \leq V(S) + V(S\cup\{i\}) \end{displaymath} Using the submodularity of $V$, which implies $V(S\cup\{i\})\leq V(S)+V(i)$: \begin{displaymath} F_\mathcal{N}(\bar{\lambda}) \leq 2 V(S) + V(i) \end{displaymath} Note that since $\bar{\lambda}$ is feasible, $S$ is also feasible and $V(S)\leq OPT(V,\mathcal{N}, B)$. Hence: \begin{equation}\label{eq:e2} F_\mathcal{N}(\bar{\lambda}) \leq 2 OPT(V,\mathcal{N}, B) + \max_{i\in\mathcal{N}} V(i) \end{equation} Putting \eqref{eq:e1} and \eqref{eq:e2} together gives the result. \end{proof}
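To close this section, here is a minimal sketch of the rounding procedure underlying lemma~\ref{lemma:rounding} (Python; the single-argument oracle \texttt{F} for the multilinear extension and the cost vector \texttt{c} are illustrative assumptions). It repeatedly moves mass along $e_i - \frac{c_i}{c_j}e_j$, to whichever endpoint of the feasible interval \eqref{eq:convex-interval} has the larger value of $F$:
\begin{verbatim}
def pipage_round(lam, c, F, tol=1e-12):
    # Round lam until at most one fractional component remains,
    # never decreasing F and never changing the total cost c . lam.
    lam = list(lam)
    while True:
        frac = [k for k, v in enumerate(lam) if tol < v < 1 - tol]
        if len(frac) < 2:
            return lam
        i, j = frac[0], frac[1]
        # Endpoints of the feasible interval for eps (cf. the lemma).
        lo = max(-lam[i], (lam[j] - 1) * c[j] / c[i])
        hi = min(1 - lam[i], lam[j] * c[j] / c[i])
        candidates = []
        for eps in (lo, hi):  # F is convex in eps: max at an endpoint
            cand = lam[:]
            cand[i] += eps
            cand[j] -= eps * c[i] / c[j]
            candidates.append(cand)
        lam = max(candidates, key=F)
\end{verbatim}
The budget is preserved exactly at every step, since the cost change along the chosen direction is $\varepsilon c_i - \varepsilon\frac{c_i}{c_j}c_j = 0$; each step makes at least one of the two chosen components integral, which guarantees termination. Note that the monotonicity guarantee $F_\mathcal{N}(\lambda)\leq F_\mathcal{N}(\bar{\lambda})$ requires an exact oracle for $F_\mathcal{N}$; with a Monte Carlo estimator such as the one sketched earlier, it holds only approximately.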