| author | Thibaut Horel <thibaut.horel@gmail.com> | 2012-11-04 22:44:51 +0100 |
|---|---|---|
| committer | Thibaut Horel <thibaut.horel@gmail.com> | 2012-11-04 22:44:51 +0100 |
| commit | cf952f7a9cf431b7f49fd8150a183295e8f894c1 (patch) | |
| tree | 586f9291ab9f93aa32428b934d21c77901f6e698 | |
| parent | 13222e35de229c969419ffd7e47d0080810b56f7 (diff) | |
| parent | fe1eddfacaec77b0038abdb899d3f5a56352e8b4 (diff) | |
| download | recommendation-cf952f7a9cf431b7f49fd8150a183295e8f894c1.tar.gz | |
Merge branch 'master' of ssh://74.95.195.229:1444/git/data_value
Conflicts:
general.tex
| -rw-r--r-- | general.tex | 15 |
| -rw-r--r-- | main.tex | 218 |
| -rw-r--r-- | problem.tex | 2 |
3 files changed, 116 insertions, 119 deletions
diff --git a/general.tex b/general.tex
index 11c471f..4f9aaad 100644
--- a/general.tex
+++ b/general.tex
@@ -64,19 +64,8 @@ Selecting experiments that maximize the information gain in the Bayesian setup l
 where $h\in \mathcal{H}$ for some subset $\mathcal{H}$ of all possible mappings $h:\Omega\to\reals$, called the \emph{hypothesis space}, and $\varepsilon_i$ are random variables in $\reals$, not necessarily identically distributed, that are independent \emph{conditioned on $h$}. This model is quite broad, and captures many learning tasks, such as:
 \begin{enumerate}
 \item\textbf{Generalized Linear Regression.} $\Omega=\reals^d$, $\mathcal{H}$ is the set of linear maps $\{h(x) = \T{\beta}x \text{ s.t. } \beta\in \reals^d\}$, and $\varepsilon_i$ are independent zero-mean normal variables, where $\expt{\varepsilon_i^2}=\sigma_i$.
-\item\textbf{Logistic Regression.} $\Omega=\reals^d$, $\mathcal{H}$ is the set of maps $\{h(x) = \frac{e^{\T{\beta} x}}{1+e^{\T{\beta} x}} \text{ s.t. } \beta\in\reals^d\}$, and $\varepsilon_i$ are independent conditioned on $h$ such that
-\begin{displaymath}
-\varepsilon_i=\begin{cases}
-1- h(x),& \text{w.~prob.}\;h(x)\\
--h(x),& \text{w.~prob.}\;1-h(x)
-\end{cases}\end{displaymath}
-
-\item\textbf{Learning Binary Functions with Bernoulli Noise.} $\Omega = \{0,1\}^d$, and $\mathcal{H}$ is some subset of $2^{\Omega}$, and
-\begin{displaymath}
-\varepsilon_i =\begin{cases}
-0, &\text{w.~prob.}\;p\\
-\bar{h}(x)-h(x), &\text{w.~prob.}\;1-p
-\end{cases}\end{displaymath}
+\item\textbf{Logistic Regression.} $\Omega=\reals^d$, $\mathcal{H}$ is the set of maps $\{h(x) = \frac{e^{\T{\beta} x}}{1+e^{\T{\beta} x}} \text{ s.t. } \beta\in\reals^d\}$, and $\varepsilon_i$ are independent conditioned on $h$ such that $$\varepsilon_i=\begin{cases} 1- h(x_i),& \text{w.~prob.}~h(x_i)\\ -h(x_i),& \text{w.~prob.}~1-h(x_i)\end{cases}$$
+\item\textbf{Learning Binary Functions with Bernoulli Noise.} $\Omega = \{0,1\}^d$, and $\mathcal{H}$ is some subset of $2^{\Omega\times\{0,1\}}$, and $$\varepsilon_i =\begin{cases}0, &\text{w.~prob.}~p\\ \bar{h}(x_i)-h(x_i), &\text{w.~prob.}~1-p\end{cases}$$
 \end{enumerate}
 In this setup, assume that the experimenter has a prior distribution on the hypothesis $h\in \mathcal{H}$. Then, the information gain objective can be written again as the mutual information between $\beta$ and $y_S$.
diff --git a/main.tex b/main.tex
--- a/main.tex
+++ b/main.tex
@@ -3,29 +3,29 @@
 In this section we present a mechanism for \EDP.
 
 Previous works on maximizing submodular functions \cite{nemhauser, sviridenko-submodular} and designing
-auction mechanisms for submodular utility functions \cite{singer-mechanisms,
-chen, singer-influence} rely on a greedy heuristic. In this heuristic, elements
-are added to the solution set according to the following greedy selection rule:
-assume that you have already selected a set $S$, then the next element to be
-included in the solution set is:
-\begin{displaymath}
- i = \argmax_{j\in\mathcal{N}\setminus S}\frac{V(S\cup\{i\}) - V(S)}{c_i}
-\end{displaymath}
-This is the generalization of the \emph{value-per-cost} ratio used in greedy
-heuristic for knapsack problems. However, note that for general submodular
-functions, the value of an element depends on the set to which it is added.
+auction mechanisms for submodular value functions \cite{singer-mechanisms,
+chen, singer-influence} rely on a greedy algorithm. In this algorithm, elements
+are added to the solution set according to the following greedy selection rule.
+Assume that $S\subseteq \mathcal{N}$ is the solution set constructed thus far; then, the next element $i$ to be
+included in $S$ is the one with the highest \emph{marginal-value-per-cost}:
+\begin{align}
+ i = \argmax_{j\in\mathcal{N}\setminus S}\frac{V(S\cup\{j\}) - V(S)}{c_j}\label{greedy}
+\end{align}
+This process terminates when no more items can be added to $S$ using \eqref{greedy} under budget $B$. This is the generalization of the \emph{value-per-cost} ratio used in the greedy approximation
+algorithm for \textsc{Knapsack}. However, in contrast to \textsc{Knapsack}, for general submodular
+functions, the marginal value of an element in \eqref{greedy} depends on the set to which the element is added.
 
-Unfortunately, even for the non-strategic case, the greedy heuristic gives an
-unbounded approximation ratio. Let us introduce $i^*
-= \argmax_{i\in\mathcal{N}} V(i)$, the element of maximum value (as a singleton
-set). It has been noted by Khuller et al. \cite{khuller} that for the maximum
+Unfortunately, even for the full information case, the greedy algorithm gives an
+unbounded approximation ratio. Let $i^*
+= \argmax_{i\in\mathcal{N}} V(i)$ be the element of maximum value (as a singleton
+set). It has been noted by Khuller \emph{et al.}~\cite{khuller} that for the maximum
 coverage problem, taking the maximum between the greedy solution and $V(i^*)$
 gives a $\frac{2e}{e-1}$ approximation ratio. In the general case, we have the
 following result from Singer \cite{singer-influence} which follows from
-Chen et al. \cite{chen}:
+Chen \emph{et al.}~\cite{chen}:
 \stratis{Is it Singer or Chen? Also, we need to introduce $V(i)$ somewhere...}
 \begin{lemma}[Singer \cite{singer-influence}]\label{lemma:greedy-bound}
-Let $S_G$ be the set computed by the greedy heuristic and let us define $i^*$:
+Let $S_G$ be the set computed by the greedy algorithm and define $i^*$:
 \begin{displaymath}
 i^* = \argmax_{i\in\mathcal{N}} V(i)
 \end{displaymath}
@@ -38,71 +38,80 @@ OPT(V,\mathcal{N},B) \leq \frac{e}{e-1}\big( 3 V(S_G) + 2 V(i^*)\big)
 Hence, taking the maximum between $V(S_G)$ and $V(i^*)$ yields an approximation
 ratio of $\frac{5e}{e-1}$. However, Singer \cite{singer-influence} notes that
 this approach breaks incentive compatibility and therefore cannot be directly
-applied to the strategic case. Indeed, assume that we are in a situation where
-the mechanism has allocated to the greedy set ($V(S_G) \geq V(i^*)$). If an
-agent $i$ from the greedy set reduces her cost, it could happen that $V(S_G)$
-becomes smaller than $V(i^*)$. In this case the mechanism will allocate to
-$i^*$ and $i$ will be out of the allocated set. This breaks the monotonicity of
-the allocation function.
+applied to the strategic case. Indeed, suppose this allocation mechanism is used, and consider a case where
+the mechanism allocates the greedy set ($V(S_G) \geq V(i^*)$). If an
+agent $i$ from $S_G$ reduces her cost, it could happen that $V(S_G)$
+becomes smaller than $V(i^*)$. In this case the mechanism's allocation becomes
+$i^*$, and $i$ is excluded from the allocated set. This violates the monotonicity of
+the allocation function and hence also truthfulness, by Myerson's Theorem.
 
-The way this issue has been addressed thus far \cite{chen, singer-influence},
-is to introduce a third quantity: if $V(i^*)$ is larger than this quantity, the
+To address this, Chen \emph{et al.}~\cite{chen} %and Singer~\cite{singer-influence},
+ introduce a third quantity: if $V(i^*)$ is larger than this quantity, the
 mechanism allocates to $i^*$, otherwise it allocates to the greedy set $S_G$.
 This quantity must be provably close to $V(S_G)$, to keep a bounded
 approximation ratio, while maintaining the monotonicity of the allocation
 algorithm. Furthermore, it must be computable in polynomial time to keep an
-overall polynomial complexity for the allocation algorithm. If the underlying
-non-strategic optimization program \eqref{eq:non-strategic} can be solved in
-polynomial time, Chen et al. \cite{chen} prove that allocating to $i^*$ when
-$V(i^*) \geq C\cdot OPT(V,\mathcal{N}\setminus\{i^*\}$ (for some constant $C$)
-and to $S_G$ otherwise yields a 8.34 approximation mechanism. For specific
-problems, Chen et al. \cite{chen} for knapsack on one hand, Singer
-\cite{singer-influence} for maximum coverage on the other hand, instead compare
-$V(i^*)$ to $OPT(R_{\mathcal{N}\setminus\{i\}}, B)$, where $R_\mathcal{N}$
-denotes a fractional relaxation of the function $V$ over the set
-$\mathcal{N}$. We say that $R_\mathcal{N}$ is a fractional relaxation of $V$
-over the set $\mathcal{N}$, if (a) $R_\mathcal{N}$ is a function defined on the
-hypercube $[0, 1]^{|\mathcal{N}|}$ and (b) for all $S\subseteq\mathcal{N}$, if
-$\mathbf{1}_S$ denotes the indicator vector of $S$ we have
-$R_\mathcal{N}(\mathbf{1}_S) = V(S)$. The optimization program
-\eqref{eq:non-strategic} extends naturally to relaxations:
-\begin{displaymath}
- OPT(R_\mathcal{N}, B) = \argmax_{\lambda\in[0, 1]^{|\mathcal{N}|}}
- \left\{R_\mathcal{N}(\lambda) \mid \sum_{i=1}^{|\mathcal{N}|} \lambda_i c_i
- \leq B\right\}
-\end{displaymath}
+overall polynomial complexity for the allocation algorithm.
+
+\sloppy
+When the underlying
+full information problem \eqref{eq:non-strategic} can be solved in
+polynomial time, Chen \emph{et al.}~\cite{chen} prove that $OPT(V,\mathcal{N}\setminus\{i^*\},B)$ can play the role of this quantity: that is, allocating to $i^*$ when
+$V(i^*) \geq C\cdot OPT(V,\mathcal{N}\setminus\{i^*\},B)$ (for some constant $C$)
+and to $S_G$ otherwise yields an 8.34-approximation mechanism. However, this is not a poly-time mechanism when the underlying problem is NP-hard (unless P=NP), as is the case for \EDP.
+
+\fussy
+For NP-hard
+problems, Chen \emph{et al.}~\cite{chen} %for \textsc{Knapsack} and Singer
+%\cite{singer-influence} for \textsc{Coverage} instead
+propose comparing
+$V(i^*)$ to %$OPT(R_{\mathcal{N}\setminus\{i\}}, B)$, where $R_\mathcal{N}$ denotes
+the optimal value of a \emph{fractional relaxation} of the function $V$ over the set
+$\mathcal{N}$. A function $R_\mathcal{N}:[0, 1]^{n}\to\reals_+$ defined on the hypercube $[0, 1]^{n}$ is a fractional relaxation of $V$
+over the set $\mathcal{N}$ if %(a) $R_\mathcal{N}$ is a function defined on the hypercube $[0, 1]^{n}$ and (b)
+ $R_\mathcal{N}(\mathbf{1}_S) = V(S)$ for all $S\subseteq\mathcal{N}$, where
+$\mathbf{1}_S$ denotes the indicator vector of $S$.
+The optimization program \eqref{eq:non-strategic} extends naturally to such relaxations:
+\begin{align}
+ OPT(R_\mathcal{N}, B) = \max_{\lambda\in[0, 1]^{n}}
+ \left\{R_\mathcal{N}(\lambda) \mid \sum_{i=1}^{n} \lambda_i c_i
+ \leq B\right\}\label{relax}
+\end{align}
+To attain a truthful constant-approximation mechanism for \textsc{Knapsack}, Chen \emph{et al.}~\cite{chen} substitute $OPT(R_\mathcal{N},B)$ for $OPT(V,\mathcal{N},B)$ in the above program, where $R_{\mathcal{N}}$ are relaxations of the \textsc{Knapsack} objectives.
+Similarly, Singer~\cite{singer-influence} follows the same approach to obtain a mechanism for \textsc{Coverage}.
 A relaxation which is commonly used, due to its well-behaved properties in the
 context of maximizing submodular functions, is the \emph{multi-linear} extension:
 \begin{equation}\label{eq:multilinear}
- F_\mathcal{N}^V(\lambda)
+ F_\mathcal{N}(\lambda)
 = \mathbb{E}_{S\sim P_\mathcal{N}^\lambda}\left[V(S)\right]
 = \sum_{S\subseteq\mathcal{N}} P_\mathcal{N}^\lambda(S) V(S)
 \end{equation}
 $P_\mathcal{N}^\lambda(S)$ is the probability of choosing the set $S$ if
-we decide for each element in $\mathcal{N}$ to pick it with probability
-$\lambda_i$ or to reject it with probability $1-\lambda_i$:
+we select each element $i$ in $\mathcal{N}$ independently with probability
+$\lambda_i$: %or to reject it with probability $1-\lambda_i$:
 \begin{displaymath}
 P_\mathcal{N}^\lambda(S) = \prod_{i\in S} \lambda_i
- \prod_{i\in\mathcal{N}\setminus S} 1 - \lambda_i
+ \prod_{i\in\mathcal{N}\setminus S} (1 - \lambda_i)
 \end{displaymath}
-For knapsack, Chen et al. \cite{chen} directly use the multi-linear extension as
-the relaxation to compare $V(i^*)$ to. For maximum coverage however, the
+For \textsc{Knapsack}, Chen \emph{et al.}~\cite{chen} directly use the multi-linear extension, as in this case \eqref{relax} can be solved in polynomial time. % for the \textsc{Knapsack} objective.
+ For \textsc{Coverage}, however, the
 optimal value of the multi-linear extension cannot be computed in polynomial
 time. Thus, Singer \cite{singer-influence} introduces a second relaxation of
-the value function which can be proven to be close to the multi-linear
-extension, by using the \emph{pipage rounding} method from Ageev and Sviridenko
+the value function. The latter is proved to be close to the multi-linear
+extension using the \emph{pipage rounding} method of Ageev and Sviridenko
 \cite{pipage}.
 
-Here, following these ideas, we introduce another relaxation of the value
-function. Note that in our case, the multi-linear extension can be written:
+Here, following these ideas, we introduce a relaxation specifically tailored to the value
+function of \EDP. The multi-linear extension of \eqref{modified} can be written:
 \begin{displaymath}
 F_\mathcal{N}(\lambda) = \mathbb{E}_{S\sim P_\mathcal{N}^\lambda}\left[\log\det A(S) \right] \;\text{with}\; A(S) = I_d + \sum_{i\in S} x_i\T{x_i}
 \end{displaymath}
-Our relaxation follows naturally by swapping the expectation and the $\log\det$
+As in the case of \textsc{Coverage}, \eqref{relax} is not a convex optimization problem, and is not easy to solve directly. %As in ~\cite{singer-influence},
+We thus introduce an additional relaxation $L_{\mathcal{N}}$.
+Our relaxation follows naturally by swapping the expectation and the $\log\det$
 in the above formula:
 \begin{align}\label{eq:concave}
 L_\mathcal{N}(\lambda) & \defeq \log\det\left(\mathbb{E}_{S\sim
@@ -110,25 +119,25 @@ in the above formula:
 &= \log\det\left(I_d + \sum_{i\in\mathcal{N}} \lambda_i x_i\T{x_i}\right)
 \end{align}
-This function is well-known to be concave and even self-concordant (see \emph{e.g.}
+This function is well-known to be concave and even self-concordant (see \emph{e.g.},
 \cite{boyd2004convex}). In this case, the analysis of Newton's method for
 self-concordant functions in \cite{boyd2004convex} shows that it is possible
-to find the maxmimum of $L_\mathcal{N}$ to any precision $\varepsilon$ in
+to find the maximum of $L_\mathcal{N}$ to any precision $\varepsilon$ in
 a number of iterations $O(\log\log\varepsilon^{-1})$. The main challenge will
 be to prove that $OPT(L_\mathcal{N}, B)$ is close to $V(S_G)$. To do so, our
 main technical contribution is to prove that $L_\mathcal{N}$ has a bounded
-approximation ratio to the value function $V$ (lemma~\ref{lemma:relaxation}).
+approximation ratio to the value function $V$ (Lemma~\ref{lemma:relaxation}).
 
-The mechanism for \EDP{} is presented in algorithm \ref{mechanism}.
+The resulting mechanism for \EDP{} is presented in Algorithm~\ref{mechanism}. Here, $C$ is an absolute constant that we determine in Section~\ref{...} (see \eqref{...}).
 \begin{algorithm}
 \caption{Mechanism for \EDP{}}\label{mechanism}
 \begin{algorithmic}[1]
 \State $i^* \gets \argmax_{j\in\mathcal{N}}V(j)$
- \State $x^* \gets \argmax_{x\in[0,1]^{|\mathcal{N}|}} \{L_{\mathcal{N}\setminus\{i^*\}}(x)
- \mid c(x)\leq B\}$
+ \State $x^* \gets \argmax_{x\in[0,1]^{n}} \{L_{\mathcal{N}\setminus\{i^*\}}(x)
+ \mid c(x)\leq B\}$, with precision $\varepsilon>0$
 \Statex
- \If{$L(x^*) < CV(i^*)$}
- \State \textbf{return} $\{i^*\}$
+ \If{$L_{\mathcal{N}\setminus\{i^*\}}(x^*) < C \cdot V(i^*)$} \label{c}
+ \State \textbf{return} $\{i^*\}$
 \Else
 \State $i \gets \argmax_{1\leq j\leq n}\frac{V(j)}{c_j}$
 \State $S_G \gets \emptyset$
@@ -142,11 +151,11 @@ The mechanism for \EDP{} is presented in algorithm \ref{mechanism}.
 \end{algorithmic}
 \end{algorithm}
-We can now state the main result of this section:
+We can now state our main result:
 \begin{theorem}\label{thm:main}
- The mechanism described in algorithm~\ref{mechanism} is truthful,
+ There exists an absolute constant $C$ such that the mechanism described in Algorithm~\ref{mechanism} is truthful,
 individually rational and budget feasible. Furthermore, for any $\varepsilon>0$, the mechanism
- has a complexity $O\left(\text{poly}(|\mathcal{N}|, d,
+ has complexity $O\left(\text{poly}(n, d,
 \log\log \varepsilon^{-1})\right)$ and returns a set $S^*$ such that:
 \begin{align*}
 OPT(V,\mathcal{N}, B)
@@ -157,7 +166,7 @@ We can now state the main result of this section:
 \end{theorem}
 
 %\stratis{Add lowerbound here too.}
-
+Note that this implies we obtain a poly-time mechanism with approximation ratio arbitrarily close to 19.68, by taking $\varepsilon = \ldots$\stratis{fix me}. In addition, we prove the following lower bound.
 \begin{theorem}\label{thm:lowerbound}
 There is no $2$-approximate, truthful, budget feasible, individually rational mechanism for EDP.
@@ -167,10 +176,13 @@ There is no $2$-approximate, truthful, budget feasible, individually rational
 Suppose, for contradiction, that such a mechanism exists.
 Consider two experiments with dimension $d=2$, such that $x_1 = e_1=[1, 0]$, $x_2=e_2=[0,1]$ and $c_1=c_2=B/2+\epsilon$.
 Then, one of the two experiments, say, $x_1$, must be in the set selected by the mechanism, otherwise the ratio is unbounded, a contradiction.
 If $x_1$ lowers its cost to $B/2-\epsilon$, by monotonicity it remains in the solution; by threshold payment, it is paid at least $B/2+\epsilon$.
 So $x_2$ is not included in the solution by budget feasibility and individual rationality: hence, the selected set attains a value $\log 2$, while the optimal value is $2\log 2$.
 \end{proof}
-\subsection{Proof of theorem~\ref{thm:main}}
+\subsection{Proof of Theorem~\ref{thm:main}}
+\stratis{individual rationality???}
 
-The proof of the properties of the mechanism is broken down into lemmas.
-Monotonicity and budget feasibility follows from the analysis of Chen et al.
+
+%The proof of the properties of the mechanism is broken down into lemmas.
+We now present the proof of Theorem~\ref{thm:main}.
+Monotonicity and budget feasibility follow from the analysis of Chen \emph{et al.}~\cite{chen}; we briefly restate the proofs for the sake of completeness.
 The proof of the approximation ratio is done in Lemma~\ref{lemma:approx}
 and uses a bound on our concave relaxation $L_\mathcal{N}$
 (Lemma~\ref{lemma:relaxation}) which is our main technical
@@ -182,34 +194,30 @@ The mechanism is monotone.
 \end{lemma}
 \begin{proof}
- Suppose, for contradiction, that there exists an agent $i$ that has been
- selected by the mechanism and that would not have been selected had she
- reported a cost $c_i'\leq c_i$ (all the other costs staying the same).
-
- If $i\neq i^*$ and $i$ has been selected, then we are in the case where
- $L_\mathcal{N}(x^*) \geq C V(i^*)$ and $i$ was included in the result set
- by the greedy part of the mechanism. By reporting a cost $c_i'\leq c_i$,
- using the submodularity of $V$, we see that $i$ will satisfy the greedy
- selection rule:
- \begin{displaymath}
- i = \argmax_{j\in\mathcal{N}\setminus S} \frac{V(S\cup\{j\})
- - V(S)}{c_j}
- \end{displaymath}
- in an earlier iteration of the greedy heuristic. Let us denote by $S_i$
+ Consider an agent $i$ with cost $c_i$ that is
+ selected by the mechanism, and suppose that she reports
+ a cost $c_i'\leq c_i$, all the other costs staying the same.
+ Suppose that when $i$ reports $c_i$, $L_{\mathcal{N}\setminus\{i^*\}}(x^*) \geq C V(i^*)$; since $s_i(c_i,c_{-i})=1$, we have $i\in S_G$.
+ By reporting a cost $c_i'\leq c_i$, $i$ may be selected at an earlier iteration of the greedy algorithm.
+ %using the submodularity of $V$, we see that $i$ will satisfy the greedy
+ %selection rule:
+ %\begin{displaymath}
+ % i = \argmax_{j\in\mathcal{N}\setminus S} \frac{V(S\cup\{j\})
+ % - V(S)}{c_j}
+ %\end{displaymath}
+ %in an earlier iteration of the greedy heuristic.
+ Denote by $S_i$
 (resp. $S_i'$) the set to which $i$ is added when reporting cost $c_i$
- (resp. $c_i'$). We have $S_i'\subseteq S_i$. Moreover:
+ (resp. $c_i'$). We have $S_i'\subseteq S_i$; in addition, $S_i'\subseteq S_G'$, the set selected greedily under $(c_i',c_{-i})$: if not, greedy selection would also terminate before selecting $i$ when she reports $c_i$, a contradiction. Moreover, we have
 \begin{align*}
 c_i' & \leq c_i \leq \frac{B}{2}\frac{V(S_i\cup\{i\})-V(S_i)}{V(S_i\cup\{i\})}\\
 & \leq \frac{B}{2}\frac{V(S_i'\cup\{i\})-V(S_i')}{V(S_i'\cup\{i\})}
 \end{align*}
- Hence $i$ will still be included in the result set.
-
- If $i = i^*$, $i$ is included iff $L_\mathcal{N}(x^*) \leq C V(i^*)$.
- Reporting $c_i'$ instead of $c_i$ does not change the value $V(i^*)$ nor
- $L_\mathcal{N}(x^*)$ (which is computed over
- $\mathcal{N}\setminus\{i^*\}$). Thus $i$ is still included by reporting
- a different cost.
+ by the monotonicity and submodularity of $V$. Hence $i\in S_G'$. Moreover, as $L_{\mathcal{N}\setminus \{i^*\}}(x^*)$ is the optimal value of \eqref{relax} under relaxation $L_{\mathcal{N}}$, reducing the costs can only increase this value, so under $c'_i\leq c_i$ the greedy set is still allocated and $s_i(c_i',c_{-i})=1$.
+ Suppose now that when $i$ reports $c_i$, $L_{\mathcal{N}\setminus \{i^*\}}(x^*) < C V(i^*)$. Then $s_i(c_i,c_{-i})=1$ iff $i = i^*$.
+ Reporting $c_{i^*}'\leq c_{i^*}$ does not change $V(i^*)$ nor
+ $L_{\mathcal{N}\setminus \{i^*\}}(x^*)$; thus $s_{i^*}(c_{i^*}',c_{-i^*})=1$.
 \end{proof}
 
 \begin{lemma}\label{lemma:budget-feasibility}
@@ -244,21 +252,21 @@ Hence, the total payment is bounded by:
 \begin{lemma}\label{lemma:complexity}
 For any $\varepsilon > 0$, the complexity of algorithm~\ref{mechanism} is
- $O(\text{poly}(|\mathcal{N}|, d, \log\log \varepsilon^{-1}))$.
+ $O(\text{poly}(n, d, \log\log \varepsilon^{-1}))$.
 \end{lemma}
 \begin{proof}
 The value function $V$ \eqref{modified} can be computed in time
- $O(\text{poly}(|\mathcal{N}|, d))$ and the mechanism only involves a linear
+ $O(\text{poly}(n, d))$ and the mechanism only involves a linear
 number of queries to the function $V$.
 The function $\log\det$ is concave and self-concordant (see
 \cite{boyd2004convex}), so for any $\varepsilon$, its maximum can be found to
 a precision $\varepsilon$ in a number of iterations of Newton's method
 $O(\log\log\varepsilon^{-1})$. Each iteration of Newton's method can be
- done in time $O(\text{poly}(|\mathcal{N}|, d))$. Thus, line 2 of
+ done in time $O(\text{poly}(n, d))$. Thus, line 2 of
 algorithm~\ref{mechanism} can be computed in time
- $O(\text{poly}(|\mathcal{N}|, d, \log\log \varepsilon^{-1}))$.
+ $O(\text{poly}(n, d, \log\log \varepsilon^{-1}))$.
 \end{proof}
 
 Finally, we prove the approximation ratio of the mechanism. We use the
@@ -379,7 +387,7 @@ referred to in the literature as \emph{cross-convexity} (see \emph{e.g.}
 - e_j)\big)
 \end{displaymath}
 where $e_i$ and $e_j$ are two vectors of the standard basis of
-$\reals^{|\mathcal{N}|}$, then $\tilde{F}$ is convex. Hence its maximum over the interval:
+$\reals^{n}$, then $\tilde{F}$ is convex. Hence its maximum over the interval:
 \begin{displaymath}
 I_\lambda = \Big[\max(-\lambda_i,\lambda_j-1), \min(1-\lambda_i, \lambda_j)\Big]
 \end{displaymath}
@@ -390,11 +398,11 @@ The lemma below proves that we can similarly trade one fractional component for
 another until one of them becomes integral \emph{while maintaining the
 feasibility of the point at which $F_\mathcal{N}$ is evaluated}. Here, by feasibility
 of a point $\lambda$, we mean that it satisfies the budget constraint $\sum_{1\leq
-i\leq |\mathcal{N}|} \lambda_i c_i \leq B$.
+i\leq n} \lambda_i c_i \leq B$.
 
 \begin{lemma}[Rounding]\label{lemma:rounding}
- For any feasible $\lambda\in[0,1]^{|\mathcal{N}|}$, there exists a feasible
- $\bar{\lambda}\in[0,1]^{|\mathcal{N}|}$ such that at most one of its component is
+ For any feasible $\lambda\in[0,1]^{n}$, there exists a feasible
+ $\bar{\lambda}\in[0,1]^{n}$ such that at most one of its components is
 fractional, that is, lies in $(0,1)$ and:
 \begin{displaymath}
 F_{\mathcal{N}}(\lambda)\leq F_{\mathcal{N}}(\bar{\lambda})
@@ -451,7 +459,7 @@ i\leq |\mathcal{N}|} \lambda_i c_i \leq B$.
 \begin{lemma}\label{lemma:relaxation-ratio}
 The following inequality holds:
 \begin{displaymath}
- \forall\lambda\in[0,1]^{|\mathcal{N}|},\;
+ \forall\lambda\in[0,1]^{n},\;
 \frac{1}{2} \,L_\mathcal{N}(\lambda)\leq F_\mathcal{N}(\lambda)\leq L_{\mathcal{N}}(\lambda)
@@ -483,10 +491,10 @@ i\leq |\mathcal{N}|} \lambda_i c_i \leq B$.
 Finally, if the minimum is attained on a face of the hypercube (a face is
 defined as a subset of the hypercube where one of the variables is fixed to 0
 or 1), without loss of generality, we can assume that the minimum is
- attained on the face where the $|\mathcal{N}|$-th variable has been fixed
+ attained on the face where the $n$-th variable has been fixed
 to 0 or 1. Then, either the minimum is attained at a point interior to the
 face or on a boundary of the face. In the first case, relation
- \eqref{eq:lhopital} still characterizes the minimum for $i< |\mathcal{N}|$.
+ \eqref{eq:lhopital} still characterizes the minimum for $i< n$.
 In the second case, by repeating the argument again by induction, we see
 that all that is left to do is to show that the bound holds for the vertices of
 the cube (the faces of dimension 1). The vertices are exactly the binary
@@ -598,7 +606,7 @@ i\leq |\mathcal{N}|} \lambda_i c_i \leq B$.
 \end{proof}
 
 \begin{proof}
- Let us consider a feasible point $\lambda^*\in[0,1]^{|\mathcal{N}|}$ such that $L_\mathcal{N}(\lambda^*)
+ Let us consider a feasible point $\lambda^*\in[0,1]^{n}$ such that $L_\mathcal{N}(\lambda^*)
 = OPT(L_\mathcal{N}, B)$. By applying lemma~\ref{lemma:relaxation-ratio} and
 lemma~\ref{lemma:rounding} we get a feasible point $\bar{\lambda}$ with at most
 one fractional component such that:
diff --git a/problem.tex b/problem.tex
index 363731a..e3ee84b 100644
--- a/problem.tex
+++ b/problem.tex
@@ -85,7 +85,7 @@ A mechanism is truthful iff every $i \in \mathcal{N}$ and every two cost vector
 OPT(V,\mathcal{N}, B) \leq \alpha V(S).
 \end{displaymath}
 The approximation ratio captures the \emph{price of truthfulness}, \emph{i.e.}, the relative value loss incurred by adding the truthfulness constraint. % to the value function maximization.
- \item \emph{Computationally efficient.} The allocation and payment function
+ \item \emph{Computational efficiency.} The allocation and payment function
 should be computable in polynomial time in the number of agents $n$. %\thibaut{Should we say something about the black-box model for $V$? Needed to say something in general, but not in our case where the value function can be computed in polynomial time}.
 \end{itemize}
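The allocation logic spelled out in the patched main.tex can be condensed into a short sketch. The Python snippet below is illustrative only and is not part of the commit: it assumes the log-det value $V(S)=\log\det(I_d+\sum_{i\in S}x_i x_i^\top)$ from \eqref{modified}, the marginal-value-per-cost rule \eqref{greedy}, and the proportional-share stopping condition $c_i \leq \frac{B}{2}\frac{V(S\cup\{i\})-V(S)}{V(S\cup\{i\})}$ that appears in the monotonicity proof; the constant `C` and the relaxation optimum `opt_L` (line 2 of Algorithm~\ref{mechanism}) are taken as inputs rather than computed here.

```python
import numpy as np

def value(S, X, d):
    """V(S) = log det(I_d + sum_{i in S} x_i x_i^T)  (assumes every x_i is nonzero)."""
    A = np.eye(d) + sum((np.outer(X[i], X[i]) for i in S), np.zeros((d, d)))
    return np.log(np.linalg.det(A))

def greedy_set(X, c, B, d):
    """Greedy phase: repeatedly pick the item with the largest marginal value
    per cost, as long as it satisfies the proportional-share condition
    c_i <= (B/2) * (V(S+{i}) - V(S)) / V(S+{i})."""
    S, remaining = [], set(range(len(c)))
    while remaining:
        v_S = value(S, X, d)
        i = max(remaining, key=lambda j: (value(S + [j], X, d) - v_S) / c[j])
        v_Si = value(S + [i], X, d)
        if v_Si > 0 and c[i] <= (B / 2.0) * (v_Si - v_S) / v_Si:
            S.append(i)
            remaining.remove(i)
        else:
            break
    return S

def allocate(X, c, B, d, opt_L, C):
    """Sketch of the allocation in Algorithm 1: return {i*} when the relaxation
    optimum opt_L falls below C * V({i*}), otherwise return the greedy set."""
    i_star = max(range(len(c)), key=lambda j: value([j], X, d))
    if opt_L < C * value([i_star], X, d):
        return [i_star]
    return greedy_set(X, c, B, d)
```

The helper `value` recomputes the determinant from scratch for clarity; an actual implementation would more likely maintain a Cholesky factor of $A(S)$ and apply rank-one updates as items are added.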
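Lemma~\ref{lemma:relaxation-ratio} claims $\frac{1}{2}L_\mathcal{N}(\lambda)\leq F_\mathcal{N}(\lambda)\leq L_\mathcal{N}(\lambda)$. This is easy to probe numerically; the sketch below (again not part of the commit, with an arbitrary random instance and sample size) evaluates the concave relaxation $L_\mathcal{N}$ in closed form and estimates the multi-linear extension $F_\mathcal{N}$ by sampling sets $S\sim P_\mathcal{N}^\lambda$.

```python
import numpy as np

def L_relax(lam, X, d):
    """Concave relaxation: L(lambda) = log det(I_d + sum_i lambda_i x_i x_i^T)."""
    A = np.eye(d) + sum(l * np.outer(x, x) for l, x in zip(lam, X))
    return np.log(np.linalg.det(A))

def F_multilinear(lam, X, d, samples=20000, seed=0):
    """Monte Carlo estimate of the multi-linear extension:
    E_{S ~ P^lambda}[ log det(I_d + sum_{i in S} x_i x_i^T) ]."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(samples):
        keep = rng.random(len(lam)) < lam          # include item i w.p. lambda_i
        A = np.eye(d) + sum(np.outer(x, x) for x, k in zip(X, keep) if k)
        total += np.log(np.linalg.det(A))
    return total / samples

# Random instance: the estimate of F should land between 0.5 * L and L.
rng = np.random.default_rng(1)
d, n = 3, 6
X = [rng.normal(size=d) for _ in range(n)]
lam = rng.random(n)
L = L_relax(lam, X, d)
F = F_multilinear(lam, X, d)
print(f"0.5*L = {0.5 * L:.3f}, F (sampled) = {F:.3f}, L = {L:.3f}")
```

The upper bound $F_\mathcal{N}\leq L_\mathcal{N}$ is simply Jensen's inequality applied to the concave $\log\det$; the $\frac{1}{2}$ lower bound is the nontrivial direction proved in the patch.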
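Finally, the value gap behind Theorem~\ref{thm:lowerbound} can be checked directly: for $x_1=e_1$, $x_2=e_2$ in dimension $d=2$, either experiment alone has value $\log\det\operatorname{diag}(2,1)=\log 2$, while both together give $\log\det\operatorname{diag}(2,2)=2\log 2$; the strategic part of the argument, of course, lives in the proof text rather than in this check.

```python
import numpy as np

def V(vectors, d=2):
    A = np.eye(d) + sum(np.outer(x, x) for x in vectors)
    return np.log(np.linalg.det(A))

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(V([e1]), V([e2]), np.log(2))        # each singleton: log 2
print(V([e1, e2]), 2 * np.log(2))         # both experiments: 2 log 2
```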
