----------------------- REVIEW 1 ---------------------
PAPER: 174
TITLE: Budget Feasible Mechanisms for Experimental Design
AUTHORS: Thibaut Horel, Stratis Ioannidis and S Muthukrishnan
----------- REVIEW -----------
This paper considers a special case of budget feasible mechanism
design. In the general problem, there is a cost c_i associated with
each element i of a ground set. There is a budget B, and any set S of
elements has a value f(S), where f is submodular. The goal is to
choose a subset S of maximum value subject to its total cost being at
most B. The twist is that the agents owning the elements can misreport
their costs, and the goal is to design a mechanism and payment scheme
that truthfully and approximately maximizes the value. For this
problem, Singer and Chen et al. provide a 7.92-competitive randomized
universally truthful mechanism.
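For concreteness, the abstract problem described above can be sketched
on a toy instance (hypothetical costs and a simple coverage function
standing in for the submodular value f; this is not the paper's
objective), brute-forcing the optimum the mechanism tries to
approximate:

```python
from itertools import combinations

# Toy instance of the budget-feasible problem: a ground set with
# (reported) costs, a submodular coverage value f, and a budget B.
# Brute force over subsets (fine for tiny n) finds the optimum.

costs = {1: 3, 2: 2, 3: 4, 4: 1}           # hypothetical costs
cover = {1: {"a", "b"}, 2: {"b", "c"},     # f(S) = |union covered|
         3: {"c", "d", "e"}, 4: {"e"}}
B = 6

def f(S):
    covered = set()
    for i in S:
        covered |= cover[i]
    return len(covered)

best = max((S for r in range(len(costs) + 1)
            for S in combinations(costs, r)
            if sum(costs[i] for i in S) <= B),
           key=f)
print(sorted(best), f(best))   # a max-value subset within budget
```

The mechanism-design difficulty is that the costs above are reported
by self-interested agents, so the selection and payment rules must
make truthful reporting optimal for them.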
The current paper considers a special case where the agents are
experimental subjects, and the value of a set of agents is the
information gain metric used in linear regression. The authors present
a 13-competitive *deterministic* mechanism via convex optimization.
The problem statement is nice and practically motivated - one often
seeks a diverse set of experimental subjects to maximize utility.
However, it is not clear whether misreporting costs is really a
practical concern - the authors should motivate this better with
citations and real-world examples. Furthermore, given that a very
simple greedy 8-approximation exists, I do not see the need for a
complicated 13-approximation - granted, it is deterministic, but
approximation in expectation versus worst case does not seem like a
big enough difference.
----------------------- REVIEW 2 ---------------------
PAPER: 174
TITLE: Budget Feasible Mechanisms for Experimental Design
AUTHORS: Thibaut Horel, Stratis Ioannidis and S Muthukrishnan
----------- REVIEW -----------
The paper considers designing truthful mechanisms for conducting
experiments. Specifically, the designer would like to select subjects
to test some hypothesis. Each subject has a (private) cost, but the
designer has some limited budget B. The goal is therefore to
truthfully select a set of subjects that minimizes the uncertainty of
the tested hypothesis while paying no more than the total budget.
This problem belongs to the family of problems known as
budget-constrained mechanism design. If the objective function is
submodular, then there is a polynomial-time randomized algorithm and a
deterministic exponential-time algorithm, each giving a constant
approximation. The objective function considered in the current paper
is indeed submodular, and the sole goal of the paper is to provide a
deterministic polynomial-time mechanism that is *approximately*
truthful.
The standard randomized algorithm uses only one random bit. This bit
is used to guess whether a single subject contributes most of the
value of the optimal solution. A well-known observation, applied many
times before in this context, is that if one can estimate “in a
monotone way” the value of the optimal solution without one player,
then one can derandomize the randomized algorithm. The technical
contribution of the paper is to provide this estimate in a way that is
“almost” monotone, which yields an approximately truthful mechanism.
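A loose sketch of the one-random-bit scheme described above, under
simplifying assumptions (additive values treated as public for
illustration, and a value-per-cost greedy standing in for the actual
budget-feasible greedy rule of Singer and Chen et al.):

```python
import random

# One-random-bit scheme (illustrative, not the paper's mechanism):
# with probability 1/2 take the single most valuable affordable
# agent; otherwise run a greedy rule on the remaining agents.
# Derandomizing means replacing the coin with a monotone comparison
# of the two branches -- the step this paper only approximates.

def one_bit_mechanism(values, costs, B, rng=random.random):
    affordable = [i for i in values if costs[i] <= B]
    star = max(affordable, key=values.get)   # best single agent
    if rng() < 0.5:
        return [star]
    # greedy by value-per-cost among the others, within budget
    chosen, spent = [], 0
    for i in sorted(affordable, key=lambda j: values[j] / costs[j],
                    reverse=True):
        if i != star and spent + costs[i] <= B:
            chosen.append(i)
            spent += costs[i]
    return chosen

print(one_bit_mechanism({1: 10, 2: 9, 3: 4},
                        {1: 5, 2: 3, 3: 2}, 5))  # either branch
```

Either branch alone can miss most of the optimal value, but their
better half is always within a constant factor of it, which is why
one random bit suffices for a constant approximation in expectation.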
Thus, from a technical point of view I am not very impressed with the
results of the paper: instead of using one bit of randomization or
exponential time the paper provides an approximate truthful mechanism
– not a very attractive tradeoff.
In general, the setting of the paper might make sense. However, I
don’t really buy the claim that derandomization is important in
experimental design:
1) Randomization is inherent in experimental design, e.g., randomly
selecting a control group or repeating the experiment several times.
And if we can use randomization anyway, we can just use the existing
truthful algorithms for maximizing submodular functions subject to a
budget constraint.
2) The subjects should be selected randomly from the population, but
the paper makes no assumption about their costs (worst case analysis).
This is not just a strange assumption; it might also introduce a
selection bias – maybe there is something special about subjects with,
e.g., small costs?
Also, what is the meaning of a constant approximation for this
objective function? I understand what a 2-approximation to traditional
goals such as revenue or welfare means, but what does it mean for this
unusual function? Is the degradation in estimation quality linear in
the approximation ratio? This is not discussed at all in the paper.
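To make the question concrete: the objective is the D-optimality
criterion from linear regression, f(S) = log det(I + sum_{i in S}
x_i x_i^T). A toy computation (hypothetical experiment vectors xs,
2-dimensional so the determinant is explicit) shows what a
c-approximation guarantees on this log-determinant scale:

```python
import math

# Information-gain objective for 2-dimensional experiment vectors:
# f(S) = log det(I + sum_{i in S} x_i x_i^T). A c-approximation
# only promises f(S) >= OPT / c in this log scale; how that
# translates to estimation quality is the reviewer's question.

def info_gain(S, xs):
    # accumulate A = I + sum of outer products x x^T (2x2 case)
    a, b, c, d = 1.0, 0.0, 0.0, 1.0
    for i in S:
        x0, x1 = xs[i]
        a += x0 * x0
        b += x0 * x1
        c += x1 * x0
        d += x1 * x1
    return math.log(a * d - b * c)   # log det of the 2x2 matrix

xs = {1: (1.0, 0.0), 2: (0.0, 1.0), 3: (1.0, 1.0)}
print(round(info_gain({1, 2}, xs), 3))   # log det diag(2, 2)
```

Because det(I + sum x_i x_i^T) grows multiplicatively with each new
informative subject, a constant-factor loss in log det corresponds to
a polynomial loss in the determinant itself, which is worth spelling
out in the paper.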
Finally, the paper proves a lower bound of 2 for truthful mechanisms.
Does this bound also hold for approximately truthful mechanisms?
----------------------- REVIEW 3 ---------------------
PAPER: 174
TITLE: Budget Feasible Mechanisms for Experimental Design
AUTHORS: Thibaut Horel, Stratis Ioannidis and S Muthukrishnan
----------- REVIEW -----------
The paper studies budget feasible mechanisms for a special submodular
function arising in the experimental design problem, and gives an
approximately truthful mechanism with a constant approximation ratio
plus an extra additive loss.
The experimental design problem itself is pretty interesting and is a
nice motivation for submodular functions.
The mechanism combines approaches from budget feasible mechanism
design with the convex relaxation of the log det objective from
previous work, and looks correct to me.
My main concern for the paper is also about the studied experimental design
problem. I feel the problem is too specific and the approach is unnecessarily
involved. (Specifically, the relaxation and rounding techniques introduce
losses in truthfulness.) It would be nice if a deterministic constant
approximation mechanism were given for the whole class of submodular functions.
Overall, I think the paper contributes to the field of budget feasible
mechanism design and (probably) experimental design. However, the
specific objective function studied, the approaches used, and the
relaxation of truthfulness make the paper less exciting.
PS. I don't understand the last sentence of the 2nd paragraph of
Section 3.3. Why does the allocation need to be maximal, and why does
truthfulness preclude this goal? Please explain.