Summary
=======
The paper studies the problem of maximizing a submodular set function under
multiple knapsack (packing/linear) constraints. The authors build on a
now-standard technique: optimizing the multilinear extension of the submodular
function and then rounding the fractional solution using the "contention
resolution scheme" framework. The key novelty is the observation that in the
continuous greedy algorithm, the step size can be chosen proportional to
1/(s+l+1)^3, where s is the upper bound on the cardinality of the solution set
and l is the number of packing constraints. Furthermore, it is shown that the
algorithm can be implemented so that the running solution has sparsity
(s+l+1), which leads to faster updates.
The suggested algorithms are evaluated experimentally and compared to the
"threshold greedy algorithm" of [8].
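For reference, the continuous greedy template the paper builds on can be
sketched roughly as follows. This is only a toy illustration with a single
cardinality constraint and a made-up coverage objective; the paper's actual
algorithm optimizes over the l packing constraints at each step and uses the
step size proportional to 1/(s+l+1)^3 discussed above.

```python
import random

def multilinear_grad(f, x, samples=200, rng=None):
    """Estimate the gradient of the multilinear extension F(x) = E[f(R(x))],
    where R(x) includes each element i independently with probability x[i].
    The i-th partial derivative is E[f(R + i) - f(R - i)]."""
    rng = rng or random.Random(0)
    n = len(x)
    grad = [0.0] * n
    for _ in range(samples):
        R = {i for i in range(n) if rng.random() < x[i]}
        for i in range(n):
            grad[i] += f(R | {i}) - f(R - {i})
    return [g / samples for g in grad]

def continuous_greedy(f, n, s, steps):
    """Toy continuous greedy under one cardinality constraint |S| <= s:
    at each step, move the fractional point toward the indicator vector of
    the s elements with the largest estimated gradient. (The paper's setting
    replaces this direction-finding step with an LP over the l packing
    constraints.)"""
    x = [0.0] * n
    delta = 1.0 / steps
    rng = random.Random(42)
    for _ in range(steps):
        g = multilinear_grad(f, x, rng=rng)
        top = sorted(range(n), key=lambda i: -g[i])[:s]
        for i in top:
            x[i] = min(1.0, x[i] + delta)
    return x

# Hypothetical coverage objective: f(S) = size of the union of chosen sets.
sets = [{1, 2, 3}, {3, 4}, {4}, {5}]
cover = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
x = continuous_greedy(cover, n=4, s=2, steps=10)
```

The fractional solution x would then be rounded to a feasible set, e.g. via a
contention resolution scheme as in the framework the paper uses.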
Comments
========
The problem of maximizing a submodular function under packing constraints is
important and interesting, and there is definitely value in obtaining a better
understanding of the computational complexity of this problem in various
settings. My main concern is that this paper appears to be an incremental
improvement over already known results:
* the paper mentions "improved algorithms" (plural), but only one algorithm is
explicitly stated (Algorithm 1). Furthermore, this algorithm is the standard
continuous greedy algorithm from [15] with a specific accounting of the
sparsity of the solution. I assume the other algorithm is the multiplicative
weights algorithm, but the authors simply refer to [5] for it, so it cannot be
considered novel.
* instead of "improved algorithms", I think it would be more accurate to say
"improved analysis". To my understanding, the algorithms are the same; only
the analysis is refined to properly account for the sparsity of the solution.
Furthermore, this paper is missing the following two things:
* a precise statement (as a theorem/proposition) of the end-to-end
computational complexity of the suggested algorithm. This is important if the
main contribution is improved running times.
* an explanation of why the multiplicative weights algorithm has a worse
running time in the experiments, even though the point of this algorithm is to
improve the running time of the continuous greedy algorithm. This is
counter-intuitive, since I would expect the LP-solving part of the algorithm
to dominate the running time (and the multiplicative weights algorithm
specifically addresses this). Is there any reason at all to prefer the
multiplicative weights algorithm? Also, it would be interesting for the
experiments to show a breakdown of the running time: what fraction of the
time is spent solving LPs?