author     Stratis Ioannidis <stratis@stratis-Latitude-E6320.(none)>  2013-07-08 10:13:01 -0700
committer  Stratis Ioannidis <stratis@stratis-Latitude-E6320.(none)>  2013-07-08 10:13:01 -0700
commit     aff4f327939dd4ddeec81a4024b38e765abba99d (patch)
tree       d8a237b79522a73dde853fb03c133069f73c20cb /abstract.tex
parent     232a91e5359ae2e924f7777159ec62f4a9dcf3c4 (diff)
download   recommendation-aff4f327939dd4ddeec81a4024b38e765abba99d.tar.gz
abstract typo
Diffstat (limited to 'abstract.tex')
-rw-r--r--  abstract.tex  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/abstract.tex b/abstract.tex
index bf078d0..54aebec 100644
--- a/abstract.tex
+++ b/abstract.tex
@@ -19,6 +19,6 @@ Each subject $i$ declares an associated cost $c_i > 0$ to be part of the experiment
mechanism for \SEDP{} with suitable properties.
We present a deterministic, polynomial time, budget feasible mechanism that is approximately truthful and yields a constant factor approximation to \EDP. In particular, for any small $\delta>0$ and $\varepsilon>0$, we can construct a $(12.98, \varepsilon)$-approximate mechanism that is $\delta$-truthful and runs in time polynomial in both $n$ and $\log\log\frac{B}{\varepsilon\delta}$.
-By applying previous work on budget feasible mechanisms with submodular objective, one could {\em only} have derived either an exponential time deterministic mechanism or a randomized polynomial time mechanism. Our mechanism yields a constant factor ($\approx 12.68$) approximation, and we show that no truthful, budget-feasible algorithms are possible within a factor $2$ approximation. We also show how to generalize our approach to a wide class of learning problems, beyond linear regression.
+By applying previous work on budget feasible mechanisms with a submodular objective, one could {\em only} have derived either an exponential time deterministic mechanism or a randomized polynomial time mechanism. Our mechanism yields a constant factor ($\approx 12.68$) approximation, and we show that no truthful, budget feasible mechanism can approximate \EDP{} within a factor of $2$. We also show how to generalize our approach to a wide class of learning problems, beyond linear regression.
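
For intuition, the allocation rule at the heart of such budget feasible mechanisms can be sketched in a few lines. The sketch below assumes a D-optimality-style submodular objective $V(S) = \log\det(I + \sum_{i \in S} x_i x_i^\top)$ as a stand-in for \EDP{}, and shows only cost-benefit greedy selection under a hard budget; the function names are illustrative, and the threshold payments and comparison step that the full mechanism needs for truthfulness and budget feasibility are omitted.

import numpy as np

# Submodular stand-in for the \EDP objective (an assumption for this sketch):
# V(S) = log det(I + sum_{i in S} x_i x_i^T), the D-optimality criterion.
def design_value(X, S):
    d = X.shape[1]
    M = np.eye(d)
    for i in S:
        M += np.outer(X[i], X[i])
    return np.linalg.slogdet(M)[1]  # log-determinant (sign is always +1 here)

# Cost-benefit greedy allocation under a hard budget: repeatedly pick the
# affordable subject with the largest marginal-gain-per-cost ratio. This is
# only the allocation side; it is not by itself truthful or budget feasible.
def greedy_allocation(X, costs, budget):
    n = X.shape[0]
    chosen, spent = [], 0.0
    remaining = set(range(n))
    while remaining:
        base = design_value(X, chosen)
        best, best_ratio = None, 0.0
        for i in remaining:
            if spent + costs[i] > budget:
                continue
            ratio = (design_value(X, chosen + [i]) - base) / costs[i]
            if ratio > best_ratio:
                best, best_ratio = i, ratio
        if best is None:  # nothing affordable improves the design
            break
        chosen.append(best)
        spent += costs[best]
        remaining.remove(best)
    return chosen

# Example usage on synthetic data: 20 subjects with 3 features each.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))            # subjects' feature vectors
costs = rng.uniform(1.0, 5.0, size=20)  # declared costs c_i > 0
print(greedy_allocation(X, costs, budget=10.0))

Because the log-determinant objective is monotone submodular, this greedy rule is the standard building block in the budget feasible mechanism literature the abstract refers to; the paper's contribution lies in making the surrounding mechanism deterministic, polynomial time, and approximately truthful.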