This paper introduces a general framework to find a truthful, (approximately)
revenue-maximizing combinatorial auction mechanism inside a given class of
mechanisms. A typical situation where this is useful is when considering
a parametrized class of mechanisms for which the optimal value of the parameter
depends on the (unknown) distribution of bidders and thus needs to be learned.
The framework is based on the general idea of random sampling, where we learn
the optimal mechanism on a portion of the population but apply it to another
portion. The main idea of the framework—which can be thought of as a way to
make it more robust to variability in the bidders' population—is to first draw
a set of participatory bidders and then learn on the complement set of bidders
by repeatedly drawing samples of bidders from the complement set and finding
a mechanism which maximizes revenue *on average* over the samples. The
mechanism thus identified is then applied to the set of participatory bidders.
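For concreteness, here is a minimal Python sketch of the random-sampling idea as I understand it; the `candidate_mechanisms` list, its `revenue` method, and the half-size sub-samples are my own illustrative assumptions rather than the paper's actual interface or sample sizes.

```python
import random


def random_sampling_framework(bidders, candidate_mechanisms, num_samples=50,
                              p=0.5, rng=None):
    """Minimal sketch of the learn-on-one-part / apply-to-the-other idea.

    `candidate_mechanisms` is a hypothetical list of mechanism objects, each
    exposing a `revenue(bidder_subset)` method; these names are illustrative,
    not the paper's API.
    """
    rng = rng or random.Random(0)

    # Step 1: draw the participatory set; the rest is the "learning" set.
    participatory_idx = {i for i in range(len(bidders)) if rng.random() < p}
    participatory = [b for i, b in enumerate(bidders) if i in participatory_idx]
    complement = [b for i, b in enumerate(bidders) if i not in participatory_idx]

    # Step 2: repeatedly sub-sample the complement and pick the mechanism
    # maximizing revenue *on average* over the sub-samples.
    samples = [rng.sample(complement, k=len(complement) // 2)
               for _ in range(num_samples)]
    best = max(candidate_mechanisms,
               key=lambda m: sum(m.revenue(s) for s in samples) / num_samples)

    # Step 3: run the learned mechanism on the participatory bidders only;
    # no revenue is extracted from the complement set.
    return best, best.revenue(participatory)
```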
In addition to the introduction of this general framework, the main
contributions of the paper are:
1. a general bound on the revenue of the mechanisms output by this method,
which depends on various parameters quantifying the complexity of the
auction class and the variability in the population of bidders.
2. a specific instantiation of the framework to a new class of auctions
introduced by the authors, bundling-boosted auctions, for which it is
possible to quantify more finely the various parameters described in item 1.
3. an algorithm to efficiently find the mechanism maximizing expected revenue
(as required by the framework) for sparse bundling-boosted auctions.
Detailed comments
=================
The problem considered in this paper is important, and the paper brings a nice
unifying perspective to it through the introduction of a general framework and
the analysis of its performance via learning-theoretic complexity measures.
Furthermore, the paper is very clear and nicely written (despite the sadly
restrictive 7-page limit…).
* an important feature of the framework is that it completely gives up on
  extracting any revenue from the non-participatory bidders. While I understand
  that this might be a necessary feature given the generality of the scheme,
  some prior work partially avoids this loss, albeit in much more restrictive
  settings. More could perhaps be said to motivate the trade-off between the
  generality of the scheme and how much revenue one can hope to extract.
* relatedly, I was a bit surprised to see that the bounds on the revenue of the
  scheme were expressed with respect to the efficient allocation on the entire
  population. Why use this as a benchmark, as opposed to, for example, the
  optimal revenue that could be obtained on the entire population? This might
  be standard in this line of work, but adding a sentence or two about this
  choice might be helpful.
* a drawback of the scheme, which again might be due to its generality, is that
  its guarantees depend on parameters which might be hard to compute or even
  bound in practice. Although the authors try to make these parameters explicit
  for specific auction classes, they still remain somewhat opaque. This is
  particularly true of the partition discrepancy quantity.
* the scheme is reminiscent of the bootstrap method in statistics where
multiple samples are drawn with replacement from the same initial sample to
account for variability in the population. Maybe this could be mentioned
somewhere to connect the paper to the broader statistics/learning literature.
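To make the connection explicit, the resampling pattern I have in mind is the standard one below (a generic bootstrap sketch of my own, not code from the paper); the analogy is loose, since the framework sub-samples the complement set whereas the bootstrap resamples a single data set with replacement.

```python
import random


def bootstrap_estimate(data, statistic, num_resamples=1000, rng=None):
    """Generic bootstrap: resample with replacement, average the statistic."""
    rng = rng or random.Random(0)
    estimates = []
    for _ in range(num_resamples):
        # Draw a resample of the same size as the data, with replacement.
        resample = [rng.choice(data) for _ in range(len(data))]
        estimates.append(statistic(resample))
    return sum(estimates) / num_resamples


# Example: bootstrap estimate of the mean of a small sample.
print(bootstrap_estimate([1, 2, 3, 4], lambda xs: sum(xs) / len(xs)))
```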