|  |  |  |
|---|---|---|
| author | Stratis Ioannidis <stratis@stratis-Latitude-E6320.(none)> | 2013-07-08 10:17:04 -0700 |
| committer | Stratis Ioannidis <stratis@stratis-Latitude-E6320.(none)> | 2013-07-08 10:17:04 -0700 |
| commit | 45246ef33fb32056fcf8da3469087b7c9a3a506b | |
| tree | b1bb25dec9b288fa5808d439f0258527b18a21b1 | |
| parent | aff4f327939dd4ddeec81a4024b38e765abba99d | |
| download | recommendation-45246ef33fb32056fcf8da3469087b7c9a3a506b.tar.gz | |
abstract polish
| mode | file | lines changed |
|---|---|---|
| -rw-r--r-- | abstract.tex | 2 |
1 file changed, 1 insertion, 1 deletion
```diff
diff --git a/abstract.tex b/abstract.tex
index 54aebec..5aa4bdb 100644
--- a/abstract.tex
+++ b/abstract.tex
@@ -19,6 +19,6 @@ Each subject $i$ declares an associated cost $c_i >0$ to be part of the experime
 mechanism for \SEDP{} with suitable properties.
 We present a deterministic, polynomial time, budget feasible mechanism scheme, that is
 approximately truthful and yields a constant factor approximation to \EDP.
 In particular, for any small $\delta>0$ and $\varepsilon>0$, we can construct a $(12.98\,,\varepsilon)$-approximate
 mechanism that is $\delta$-truthful and runs in polynomial time in both $n$ and $\log\log\frac{B}{\epsilon\delta}$.
-By applying previous work on budget feasible mechanisms with a submodular objective, one could {\em only} have derived either an exponential time deterministic mechanism or a randomized polynomial time mechanism. Our mechanism yields a constant factor ($\approx 12.68$) approximation, and we show that no truthful, budget-feasible algorithms are possible within a factor $2$ approximation. We also show how to generalize our approach to a wide class of learning problems, beyond linear regression.
+By applying previous work on budget feasible mechanisms with a submodular objective, one could {\em only} have derived either an exponential time deterministic mechanism or a randomized polynomial time mechanism. We also establish that no truthful, budget-feasible algorithms are possible within a factor $2$ approximation, and show how to generalize our approach to a wide class of learning problems, beyond linear regression.
```
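
The $(12.98\,,\varepsilon)$-approximation, $\delta$-truthfulness, and budget feasibility guarantees named in this hunk are not defined within the lines shown. As a rough orientation only, one common bicriteria reading of such guarantees is sketched below; the symbols $V$, $S^*$, $p_i$, $u_i$, and $\mathcal{M}$ are illustrative placeholders, not notation from the paper, whose own definitions may differ.

```latex
% Hedged sketch (requires amsmath): one conventional reading of a
% $(\beta,\varepsilon)$-approximate, $\delta$-truthful, budget feasible
% mechanism $\mathcal{M}$ with value function $V$, payments $p_i$, budget $B$,
% agent utilities $u_i$, and optimal subject set $S^*$. All names here are
% placeholders; the paper's definitions (not visible in this hunk) govern.
\begin{align*}
  V(\mathcal{M}) &\geq \tfrac{1}{\beta}\, V(S^*) - \varepsilon
      && \text{(approximation; here } \beta = 12.98\text{)}\\
  \textstyle\sum_{i \in \mathcal{M}} p_i &\leq B
      && \text{(budget feasibility)}\\
  u_i(\text{truthful report}) &\geq u_i(\text{any misreport}) - \delta
      && \text{(}\delta\text{-truthfulness)}
\end{align*}
```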
