------------- Review from Reviewer 1 -------------
Evaluation of work and contribution           : 5
Significance                                  : 5
Originality and novelty                       : 5
Relevance to the call for papers              : 5
Readability and organization                  : 5
Technical quality                             : 5
Overall recommendation                        : 5
Confidence                                    : 5
A talk on this paper would be of interest to a broad cross-section of the EC community : 5

-- Comments to the author(s):
SPC META-REVIEW: Please disregard scores.
-- Summary:
The discussion of the paper centered on two issues.



1. It's not clear that the mechanism presented is truthful. It's clear that it's epsilon-truthful for arbitrarily small epsilon, but the paper does not explain how to make it truthful "on the nose." The issue is that the algorithm for finding OPT_{-i^*} needs to solve a convex program. The value of OPT_{-i^*} is not necessarily a multiple of the increments between different bids. Even worse, it is easy to construct convex programs where the optimal value is irrational even when all the parameters of the program are rational. It is therefore incumbent on the authors either to state that the solution concept is epsilon-truthfulness for some value of epsilon that appears in the algorithm's running time, or to explain whether there is a way to modify the algorithm to restore exact truthfulness.
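To make point 1 concrete, here is a standard toy instance (ours, not taken from the paper) of a convex program with rational data whose optimal value is irrational:

```latex
% Minimize a linear objective over a convex feasible set with rational data:
%   minimize   x
%   subject to x^2 >= 2,  x >= 0
% The feasible set is [sqrt(2), +infinity), so the optimal value is sqrt(2),
% which is irrational and hence not a multiple of any bid increment.
\min_{x \ge 0} \; x \quad \text{s.t.} \quad x^2 \ge 2,
\qquad \mathrm{OPT} = \sqrt{2}.
```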



2. Some of the PC members found the significance of the results to be underwhelming, since a universally truthful mechanism using only one bit of randomness was already known for (a generalization of) this problem.



That having been said, the PC also valued the conceptual contribution this paper made in providing a nice model for information procurement, as well as the technical innovations used in designing the deterministic mechanism.


---------- End of Review from Reviewer 1 ----------

------------- Review from Reviewer 2 -------------
Evaluation of work and contribution           : 6
Significance                                  : 5
Originality and novelty                       : 6
Relevance to the call for papers              : 8
Readability and organization                  : 6
Technical quality                             : 6
Overall recommendation                        : 3
Confidence                                    : 10
A talk on this paper would be of interest to a broad cross-section of the EC community : 5

-- Comments to the author(s):
The paper considers designing truthful mechanisms for conducting experiments. Specifically, the designer would like to select subjects to test some hypothesis. Each subject has a (private) cost, but the designer has some limited budget B. The goal is therefore to truthfully select a set of subjects that will minimize the uncertainty of the tested hypothesis while paying no more than the total budget.



This problem belongs to the class of problems known as budget-constrained mechanism design. If the objective function is submodular, then there is a polynomial-time randomized algorithm that gives a constant approximation, and a deterministic exponential-time algorithm that gives a constant approximation. The objective function considered in the current paper is indeed submodular, and the only goal of the paper is to provide a deterministic polynomial-time mechanism.



The standard randomized algorithm uses only one random bit. This bit is used to guess whether there is a single subject that contributes most of the value to the optimal solution or not. A well-known observation that was applied many times before in this context is that if one can estimate "in a monotone way" the optimal solution without one player, then one can derandomize the randomized algorithm. The technical contribution of the paper is to provide this estimation.
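As a rough illustration of the structure described above (a sketch under our own simplifications, not the paper's or Singer's actual mechanism; `value`, `costs`, and `estimate_opt_without` are placeholder names, and the greedy stopping rule is only schematic):

```python
# Sketch of the one-random-bit budget-feasible mechanism and its
# derandomization via a monotone estimate.  Illustrative only.
import random

def proportional_share_greedy(agents, value, costs, budget):
    """Schematic proportional-share greedy: keep an agent only if its cost
    is at most its proportional share of the budget."""
    chosen = []
    for i in sorted(agents, key=lambda i: -value([i])):
        marginal = value(chosen + [i]) - value(chosen)
        # Short-circuit avoids division by zero for zero-value agents.
        if marginal > 0 and costs[i] <= budget * marginal / value(chosen + [i]):
            chosen.append(i)
    return chosen

def randomized_mechanism(agents, value, costs, budget):
    """One random bit: guess whether a single agent dominates the optimum."""
    best_single = max(agents, key=lambda i: value([i]))
    if random.random() < 0.5:
        return [best_single]
    return proportional_share_greedy([i for i in agents if i != best_single],
                                     value, costs, budget)

def deterministic_mechanism(agents, value, costs, budget, estimate_opt_without):
    """Derandomization: replace the coin flip by comparing the best single
    agent's value against a *monotone* estimate of OPT without that agent."""
    best_single = max(agents, key=lambda i: value([i]))
    rest = [i for i in agents if i != best_single]
    if value([best_single]) >= estimate_opt_without(best_single):
        return [best_single]
    return proportional_share_greedy(rest, value, costs, budget)
```

The point of the monotone estimate is that the comparison, and hence the allocation, cannot be manipulated by the excluded agent's bid, which is what keeps the derandomized comparison compatible with truthfulness.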



In general, the setting of the paper might make sense. However, I don't really buy the claim that derandomization is important in experimental design:



1) Randomization is inherent in experimental design, e.g., randomly selecting a control group or repeating the experiment several times. And if we can use randomization anyway, we can just use the existing truthful algorithms for maximizing submodular functions subject to a budget constraint.



2) The subjects should be selected randomly from the population, but the paper makes no assumption about their costs (worst-case analysis). This is not just a strange assumption; it might also introduce a selection bias - maybe there is something special about subjects with, e.g., small costs?



Also, what is the meaning of giving a constant approximation for the objective function? I understand what a 2-approximation to traditional goals such as revenue or welfare means, but what does it mean for this strange function? Is the degradation in quality linear in the approximation ratio? This is not discussed at all in the paper.



I'm also worried about the truthfulness of the mechanism. Truthfulness is proved for Algorithm 1, but in the proofs it seems that Algorithm 1 is not the real algorithm, since later some approximate solution is used (Lemma 5.1). It is well known that using such approximations might not preserve truthfulness. This might not be an issue in the current setting, but a proof should be provided if this is indeed the case.
-- Summary:
.
---------- End of Review from Reviewer 2 ----------

------------- Review from Reviewer 3 -------------
Evaluation of work and contribution           : 5
Significance                                  : 5
Originality and novelty                       : 5
Relevance to the call for papers              : 10
Readability and organization                  : 7
Technical quality                             : 6
Overall recommendation                        : 5
Confidence                                    : 5
A talk on this paper would be of interest to a broad cross-section of the EC community : 8

-- Comments to the author(s):
The authors consider a special case of the submodular budgeted procurement problem introduced by Singer [2010]. In the problem considered in this paper, an experimenter seeks to procure the participation of "subjects", the data points, for an experiment, with the goal of reconstructing a linear model through regression. The experimenter has a budget, and the subjects have a private cost for participating in the experiment. The authors' model assumes that data procured from the subjects is subject to Gaussian noise, and moreover the experimenter's prior over the linear model is also Gaussian. Under these assumptions, the relative entropy, aka the information gain, as a function of the participants in the experiment is a submodular function. Therefore, the budgeted variant of the problem in the strategic setting fits in the framework of Singer [2010].



As the authors point out, there already exists a randomized, polynomial time, universally truthful, constant approximation for the general submodular budgeted procurement problem. The authors' result strengthens this to a deterministic mechanism, albeit with a slightly worse constant factor approximation. The approach of the authors has been used by Singer and by Chen et al. for different special cases of the submodular procurement problem, and requires the ability to estimate the full-information optimum value to within a constant factor. As in those prior works, the authors do so by constructing a specialized relaxation of their problem satisfying certain properties. The main technical contribution is the formulation of this relaxation, and proving its integrality gap via a connection to the multilinear relaxation of the submodular relative entropy function. The proof of this lemma is nontrivial and interesting.



However, there are several factors that leave me unexcited about the contributions of this paper. First and foremost, the result is fairly incremental: there already exists a constant factor, _universally_ truthful mechanism for this problem from prior work due to Singer. Making it deterministic is somewhat underwhelming, especially since Singer's result is _barely_ randomized, in the sense that it randomizes over two outcomes, one of which is the unlikely outcome that a single participant in the experiment dominates the optimum value. Second, the model makes several strong assumptions: the noise is Gaussian, and the prior is Gaussian. Relaxing these assumptions would be nice. Last and least, the authors' exposition of their model in Section 3 leaves too much to the reader -- I found myself doing too much matrix algebra to verify the expressions derived by the authors, and to gain some intuition as to their meaning. I'm sure this would be elementary to a statistician, but for an EC audience the presentation should be much more self-contained.


-- Summary:
To summarize, whereas the technical work in this paper is interesting, the conceptual contribution is not particularly exciting. I would say this is a marginal accept.
---------- End of Review from Reviewer 3 ----------

------------- Review from Reviewer 4 -------------
Evaluation of work and contribution           : 8
Significance                                  : 8
Originality and novelty                       : 8
Relevance to the call for papers              : 10
Readability and organization                  : 9
Technical quality                             : 8
Overall recommendation                        : 8
Confidence                                    : 10
A talk on this paper would be of interest to a broad cross-section of the EC community : 10

-- Comments to the author(s):
This paper studies the problem of how to design incentive compatible mechanisms for experimental design in the budget feasibility model.  In this model there are multiple strategic agents, each selling some service, and a buyer with a budget and a combinatorial utility function defined over the agents' services.  The objective is to design an incentive compatible mechanism that maximizes the buyer's utility function subject to the constraint that the sum of payments made to the agents does not exceed the budget.  The paper takes the buyer's objective to be a standard model of D-optimal design, where the goal is to select a subset of experiments that minimizes the error covariance matrix.  The main result in the paper is a deterministic budget feasible mechanism for the experiment design problem which is guaranteed to be within a factor of 12.68 of the optimal omniscient solution (i.e. the solution that can be obtained without computational limitations and when knowing all agents' costs a priori).



The paper addresses the problem of procuring information efficiently in a strategic setting.  The model suggested in the paper is novel and interesting.  It captures settings in which the objective is to efficiently buy information from strategic agents, and could have many applications in data markets, crowdsourcing, and recommendation systems.



From a technical perspective, the main result in this paper introduces new techniques that could be of general interest.  The main result in the paper is a deterministic budget feasible mechanism for the experiment design problem.  The main technical component in this result is the construction of a maximum operator, used by the mechanism, which is necessary to obtain a constant factor approximation guarantee without breaking truthfulness.  This operator is constructed using a framework that was previously introduced in this context, though the techniques are novel.



The main idea is to construct a concave relaxation L whose optimal solution can be obtained in polynomial time and is approximately close to the proportional share solution (the main workhorse of the budget feasibility framework).  Showing that L is approximately close to the proportional share solution is done by establishing an integrality gap of 2 via a sophisticated technique.  Using pipage rounding, the authors show that any solution -- and in particular the optimal solution to the concave relaxation -- can be rounded without decreasing its value under the multilinear relaxation F.  The authors then show that the multilinear relaxation F approximates the concave relaxation L by bounding the partial derivatives of the ratio between L and F.  This is an interesting and new approach that can be applied in other settings as well.
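For readers less familiar with the object, here is a minimal sketch (ours, not the paper's code) of the multilinear extension F of a set function f; the brute-force enumeration is exponential in the ground set size and meant only as an illustration on tiny instances:

```python
import itertools

def multilinear_extension(f, x):
    """F(x) = E[f(R)], where the random set R contains each element i
    independently with probability x[i].  Computed exactly by summing
    over all 2^n subsets, weighted by their inclusion probabilities."""
    n = len(x)
    total = 0.0
    for bits in itertools.product([0, 1], repeat=n):
        p = 1.0
        for i, b in enumerate(bits):
            p *= x[i] if b else 1 - x[i]
        total += p * f([i for i, b in enumerate(bits) if b])
    return total

# Example: for the OR function f(S) = min(|S|, 1) on two elements,
# F([0.5, 0.5]) = 1 - 0.5 * 0.5 = 0.75.
```

Pipage rounding exploits the fact that for submodular f, F is convex along any direction e_i - e_j, so moving mass between two fractional coordinates toward the better of the segment's two endpoints never decreases F; iterating yields an integral point whose f-value is at least the fractional starting value of F.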







Comments to the authors:



1. The paper needs to provide a concrete example showing that the maximum operator breaks incentive compatibility.  The main technical content of the paper shows a nice way to approximate the maximum operator between two solutions, but there is no example of why taking the maximum of the two solutions breaks incentive compatibility.



It is known that for additive functions, one can use the maximum operator directly without losing incentive compatibility; the known example in which using the maximum operator breaks incentive compatibility is when the buyer has a coverage function (and hence a special case of submodular).  In the problem the paper studies, the buyer's valuation function is more general than additive and is a special case of submodular.  Hence, without an example of why applying the maximum directly breaks incentive compatibility, there is no justification for using an approximate operator.  This issue can easily be resolved by providing such an example.



2. Since additive functions are a special case of the Experiment Design objective, the lower bound in 5.3 seems redundant given Singer 2010.  Also, for additive functions a better lower bound of 2 + sqrt{2} can be obtained, due to Chen et al. 2011.








-- Summary:
In summary, this is an interesting paper that does a very nice job of modeling information procurement.  It seems likely it will have many applications in domains relevant to the EC community.  From a technical perspective it is also quite interesting and introduces a nice rounding technique that can be further used in other areas.
---------- End of Review from Reviewer 4 ----------