Summary
=======

The paper studies the problem of maximizing a submodular set function under
multiple knapsack (packing/linear) constraints. The authors build on a now
standard technique: optimizing the multilinear extension of the submodular
function and then rounding the fractional solution using the "contention
resolution scheme" framework. The key novelty is the observation that, in
the continuous greedy algorithm, the step size can be chosen proportional to
1/(s+l+1)^3, where s is the upper bound on the cardinality of the solution
set and l is the number of packing constraints. Furthermore, it is shown
that the algorithm can be implemented in such a way that the running
solution has sparsity (s+l+1), which leads to faster updates.
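
For concreteness, here is a minimal Python sketch of the generic continuous
greedy scheme with the step size described above. It is an illustration
under assumptions, not the paper's implementation: the names f, lp_oracle,
and estimate_marginal are placeholders of mine, and the gradient of the
multilinear extension is estimated by naive sampling.

    import random

    def estimate_marginal(f, x, i, samples):
        # Sampling estimate of dF/dx_i for the multilinear extension
        # F(x) = E[f(R(x))], where R(x) contains each element j
        # independently with probability x_j.
        total = 0.0
        for _ in range(samples):
            R = {j for j, xj in enumerate(x) if j != i and random.random() < xj}
            total += f(R | {i}) - f(R)
        return total / samples

    def continuous_greedy(f, n, s, l, lp_oracle, samples=50):
        # Generic continuous greedy with the step size ~ 1/(s+l+1)^3 that
        # the paper proposes; lp_oracle(grad) is assumed to return a point
        # v of the packing polytope maximizing <grad, v>.
        delta = 1.0 / (s + l + 1) ** 3
        x = [0.0] * n
        t = 0.0
        while t < 1.0:
            grad = [estimate_marginal(f, x, i, samples) for i in range(n)]
            v = lp_oracle(grad)  # direction-finding step (an LP)
            x = [min(1.0, xi + delta * vi) for xi, vi in zip(x, v)]
            t += delta
        return x  # fractional point, rounded afterwards by a CR scheme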

The suggested algorithms are evaluated experimentally and compared to the
"threshold greedy algorithm" of [8].

Comments
========

The problem of maximizing a submodular function under packing constraints
is important and interesting, and there is definitely value in obtaining
a better understanding of the computational complexity of this problem in
various settings. My main concern is that this paper appears to be an
incremental improvement over already known results:

* the paper mentions "improved algorithms" (plural), but only one algorithm
is explicitly stated (Algorithm 1). Furthermore, this algorithm is the
standard continuous greedy algorithm from [15], with a specific accounting
of the sparsity of the solution. I am assuming that the other algorithm is
the multiplicative weights algorithm, but the authors simply refer to [5]
for it, so it cannot be considered novel.

* instead of mentioning "improved algorithms", I think it would be more
accurate to say "improved analysis". To my understanding, the algorithms
are the same; only the analysis is refined to properly account for the
sparsity of the solution.

Furthermore, this paper is missing the following two things:

* a precise statement (in a theorem/proposition) of the end-to-end
  computational complexity of the suggested algorithm. This is important if
  the main contribution is improved running times.

* an explanation of why the multiplicative weights algorithm has a worse
  running time in the experiments, even though the point of this algorithm
  is to improve the running time of the continuous greedy algorithm. This is
  counter-intuitive, since I would expect the LP-solving part of the
  algorithm to dominate the running time (and the multiplicative weights
  algorithm specifically addresses this). Is there any reason at all to
  prefer the multiplicative weights algorithm? Also, it would be interesting
  for the experiments to show a breakdown of the running time: what fraction
  of the time is spent solving LPs? (See the instrumentation sketch below.)
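
To make the requested breakdown concrete, here is a minimal sketch of how
the LP-solving fraction could be measured. The oracle interface and the
names (lp_oracle, continuous_greedy) are illustrative assumptions carried
over from the sketch above, not the authors' code.

    import time

    class TimedOracle:
        # Wraps a direction-finding (LP) oracle and accumulates the
        # wall-clock time spent inside it, so that the LP share of the
        # total running time can be reported.
        def __init__(self, oracle):
            self.oracle = oracle
            self.lp_time = 0.0

        def __call__(self, grad):
            start = time.perf_counter()
            v = self.oracle(grad)
            self.lp_time += time.perf_counter() - start
            return v

    # Hypothetical usage: wrap the oracle, run the algorithm, report the split.
    # timed = TimedOracle(lp_oracle)
    # start = time.perf_counter()
    # x = continuous_greedy(f, n, s, l, timed)
    # total = time.perf_counter() - start
    # print("fraction of time solving LPs: %.1f%%" % (100 * timed.lp_time / total))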