 intro.tex |  3 +++
 main.tex  | 41 ++++++++++++-----------------------------
 2 files changed, 15 insertions(+), 29 deletions(-)
diff --git a/intro.tex b/intro.tex
index ba46534..0b5b70f 100644
--- a/intro.tex
+++ b/intro.tex
@@ -63,3 +63,6 @@ $O(\log n)$-competitive algorithm. In the bidding model $O(1)$-competitive
algorithm.
\stratis{What is known about the maximization of logdet in the non-strategic case? Is it NP hard? Approximation ratios?}
+\thibaut{Knapsack reduces to our problem in dimension 1, hence maximizing
+$\log\det$ is NP-hard. An approximation ratio of $(1-1/e)$ is achievable by
+applying the general algorithm of Sviridenko \cite{sviridenko-submodular}.}
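As a side note, one concrete way to read the dimension-1 reduction above (our reading; the note does not spell it out): in dimension $d=1$ each point is a scalar $x_i$, so for any set $S$
\begin{displaymath}
    \log\det A(S) = \log\Big(1 + \sum_{i\in S} x_i^2\Big),
\end{displaymath}
a strictly increasing function of the additive value $\sum_{i\in S} x_i^2$. Maximizing it under a budget constraint is therefore exactly a knapsack instance with item values $x_i^2$; conversely, a knapsack instance with values $v_i$ maps to points $x_i = \sqrt{v_i}$.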
diff --git a/main.tex b/main.tex
index e2b8335..178ad1f 100644
--- a/main.tex
+++ b/main.tex
@@ -62,10 +62,10 @@ the allocation function.
The way this issue has been addressed thus far \cite{chen, singer-influence}
is to introduce a third quantity: if $V(i^*)$ is larger than this quantity, the
mechanism allocates to $i^*$, otherwise it allocates to the greedy set $S_G$.
-To keep a bounded approximation ratio, this quantity must be provably close to
-$V(S_G)$ while maintaining the monotonicity of the allocation algorithm.
-Furthermore, it must be computable in polynomial time to keep an overall
-polynomial complexity for the allocation algorithm. If the underlying
+This quantity must be provably close to $V(S_G)$, to keep a bounded
+approximation ratio, while maintaining the monotonicity of the allocation
+algorithm. Furthermore, it must be computable in polynomial time to keep an
+overall polynomial complexity for the allocation algorithm. If the underlying
non-strategic optimization program \eqref{eq:non-strategic} can be solved in
polynomial time, Chen et al. \cite{chen} prove that allocating to $i^*$ when
$V(i^*) \geq C\cdot OPT(V,\mathcal{N}\setminus\{i^*\})$ (for some constant $C$)
@@ -73,26 +73,25 @@ and to $S_G$ otherwise yields an 8.34 approximation mechanism. For specific
problems, Chen et al. \cite{chen} for knapsack on one hand, Singer
\cite{singer-influence} for maximum coverage on the other hand, instead compare
$V(i^*)$ to $OPT(R_{\mathcal{N}\setminus\{i^*\}}, B)$, where $R_\mathcal{N}$
-denotes a fractional relaxation of the function $V$ over the set
-$\mathcal{N}$.
+denotes a fractional relaxation of the function $V$ over the set $\mathcal{N}$.
We say that $R_\mathcal{N}$ is a fractional relaxation of $V$ over the set
$\mathcal{N}$ if (a) $R_\mathcal{N}$ is a function defined on the hypercube
$[0, 1]^{|\mathcal{N}|}$, and (b) for all $S\subseteq\mathcal{N}$, if
$\mathbf{1}_S$ denotes the indicator vector of $S$, we have
$R_\mathcal{N}(\mathbf{1}_S) = V(S)$. A relaxation which is commonly used, due
-to its nice properties in the context of maximizing submodular functions, is the
-\emph{multi-linear} extension:
+to its useful properties in the context of maximizing submodular
+functions, is the \emph{multi-linear} extension:
\begin{equation}\label{eq:multilinear}
F_\mathcal{N}^V(\lambda)
= \mathbb{E}_{S\sim P_\mathcal{N}^\lambda}\left[V(S)\right]
= \sum_{S\subseteq\mathcal{N}} P_\mathcal{N}^\lambda(S) V(S)
\end{equation}
-where $P_\mathcal{N}^\lambda(S)$ is the probability of choosing the set $S$ if
+$P_\mathcal{N}^\lambda(S)$ is the probability of choosing the set $S$ if
we decide for each element $i\in\mathcal{N}$ to pick it with probability
$\lambda_i$ or to reject it with probability $1-\lambda_i$:
\begin{displaymath}
- P_\mathcal{N}^\lambda = \prod_{i\in S} \lambda_i
+ P_\mathcal{N}^\lambda(S) = \prod_{i\in S} \lambda_i
    \prod_{i\in\mathcal{N}\setminus S} (1 - \lambda_i)
\end{displaymath}
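For concreteness, a minimal numerical sketch of \eqref{eq:multilinear} (the helper names below are ours, chosen only for illustration; the exact sum is exponential in $|\mathcal{N}|$ and is meant as a sanity check on small instances):

    # Sketch of P_N^lambda(S) and the multi-linear extension F_N^V.
    from itertools import combinations
    from math import prod

    def set_probability(lam, S, N):
        # P_N^lambda(S): pick each i in S with prob lam[i], reject the rest.
        return prod(lam[i] for i in S) * prod(1 - lam[i] for i in N - S)

    def multilinear_extension(V, lam, N):
        # F_N^V(lambda) = sum over all subsets S of P_N^lambda(S) * V(S).
        return sum(set_probability(lam, set(S), N) * V(set(S))
                   for k in range(len(N) + 1)
                   for S in combinations(N, k))

    # Property (b) of a fractional relaxation: at an indicator vector,
    # the extension coincides with V.
    N = {0, 1, 2}
    V = lambda S: len(S) ** 0.5            # any toy set function
    lam = {0: 1.0, 1: 0.0, 2: 1.0}         # indicator vector of S = {0, 2}
    assert abs(multilinear_extension(V, lam, N) - V({0, 2})) < 1e-9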
For knapsack, Chen et al. \cite{chen} directly use the multi-linear extension as
@@ -113,8 +112,8 @@ function. Note that in our case, the multi-linear extension can be written:
Our relaxation follows naturally by swapping the expectation and the $\log\det$
in the above formula:
\begin{align}\label{eq:concave}
- L_\mathcal{N}(\lambda) & \defeq \log\det\mathbb{E}_{S\sim
- P_\mathcal{N}^\lambda} A(S) \\
+ L_\mathcal{N}(\lambda) & \defeq \log\det\left(\mathbb{E}_{S\sim
+ P_\mathcal{N}^\lambda}[A(S)]\right)\notag \\
&= \log\det\left(I_d + \sum_{i\in\mathcal{N}}
\lambda_i x_i\T{x_i}\right)
\end{align}
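Unlike the multi-linear extension, the relaxation \eqref{eq:concave} is cheap to evaluate thanks to its closed form; a numpy-based sketch (function name ours, purely illustrative):

    import numpy as np

    def concave_relaxation(lam, X):
        # L_N(lambda) = log det(I_d + sum_i lam_i * x_i x_i^T),
        # with lam an array of shape (n,) and X of shape (n, d), row i = x_i.
        d = X.shape[1]
        M = np.eye(d) + (lam[:, None] * X).T @ X   # I_d + X^T diag(lam) X
        return np.linalg.slogdet(M)[1]             # M is positive definite

    rng = np.random.default_rng(0)
    print(concave_relaxation(rng.uniform(size=5), rng.standard_normal((5, 3))))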
@@ -158,7 +157,7 @@ We can now state the main result of this section:
\end{displaymath}
we get an approximation ratio of:
\begin{displaymath}
- 1 + C^* = \frac{14e-3 + \sqrt{160e^2-48e + 9}}{2(e-1)}\simeq 18.68
+ 1 + C^* = \frac{14e-3 + \sqrt{160e^2-48e + 9}}{2(e-1)}\simeq 19.68
\end{displaymath}
\end{theorem}
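The closed form can be checked numerically (a one-off sanity check, not part of the paper):

    from math import e, sqrt

    ratio = (14 * e - 3 + sqrt(160 * e**2 - 48 * e + 9)) / (2 * (e - 1))
    print(round(ratio, 2))  # -> 19.68, matching the corrected value above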
@@ -304,22 +303,6 @@ Our main contribution is lemma~\ref{lemma:relaxation} which gives an
approximation ratio for $L_\mathcal{N}$ and is key in the proof of
theorem~\ref{thm:main}.
-\begin{lemma}\label{lemma:concave}
- The \emph{concave relaxation} $L_\mathcal{N}$ is concave\footnote{Hence
- this relaxation is well-named!
-\stratis{avoid humor}}.
-\end{lemma}
-
-\begin{proof}
- This follows from the concavity of the $\log\det$ function over symmetric
- positive semi-definite matrices. More precisely, if $A$ and $B$ are two
- symmetric positive semi-definite matrices, then:
- \begin{multline*}
- \forall\alpha\in [0, 1],\; \log\det\big(\alpha A + (1-\alpha) B\big)\\
- \geq \alpha\log\det A + (1-\alpha)\log\det B
- \end{multline*}
-\end{proof}
-
It has already been observed (see for example \cite{dughmi}) that the
multi-linear extension exhibits the cross-convexity property: it is convex along
any direction $e_i-e_j$, where $e_i$ and $e_j$ are two elements of the standard