path: root/abstract.tex
author	Stratis Ioannidis <stratis@stratis-Latitude-E6320.(none)>	2013-02-11 09:37:09 -0800
committer	Stratis Ioannidis <stratis@stratis-Latitude-E6320.(none)>	2013-02-11 09:37:09 -0800
commit	8c09cfd7da709aab03fb004b58ecd8e1eb4fb553 (patch)
tree	fe24e8514094cfcd172fce175bf6df60d0031d9a /abstract.tex
parent	114a6b8eac3e6addebe84b831c5eafbec7bc9ef4 (diff)
download	recommendation-8c09cfd7da709aab03fb004b58ecd8e1eb4fb553.tar.gz
muthu
Diffstat (limited to 'abstract.tex')
-rw-r--r--	abstract.tex	19
1 file changed, 11 insertions(+), 8 deletions(-)
diff --git a/abstract.tex b/abstract.tex
index f005dbf..eebeb97 100644
--- a/abstract.tex
+++ b/abstract.tex
@@ -1,21 +1,24 @@
%We initiate the study of mechanisms for \emph{experimental design}.
In the classical {\em experimental design} setting,
-an experimenter \E\ with a budget $B$ has access to a population of $n$ potential experiment subjects $i\in \{1,\ldots,n\}$, each associated with a vector of features $x_i\in\reals^d$ as well as a cost $c_i>0$.
+an experimenter \E\
+%with a budget $B$
+has access to a population of $n$ potential experiment subjects $i\in \{1,\ldots,n\}$, each associated with a vector of features $x_i\in\reals^d$.
+%as well as a cost $c_i>0$.
Conducting an experiment with subject $i$ reveals an unknown value $y_i\in \reals$ to \E. \E\ typically assumes some
hypothetical relationship between $x_i$'s and $y_i$'s, \emph{e.g.}, $y_i \approx \T{\beta} x_i$, and estimates
$\beta$ from experiments.
%conducting the experiments and obtaining the measurements $y_i$ allows
%\E\ can estimate $\beta$.
-\E 's goal is to select which experiments to conduct, subject to her budget constraint.
+As a proxy for various practical constraints, \E{} may select only a subset of the subjects for the experiments.
+%\E 's goal is to select which experiments to conduct, subject to her budget constraint.
%, to obtain the best estimate possible for $\beta$.
-We initiate the study of mechanisms for experimental design. In this setting,
-subjects are \emph{strategic} and may lie about their costs. In particular, we formulate the {\em Experimental Design Problem} (\EDP) as finding a set $S$ of subjects that maximize $V(S) = \log\det(I_d+\sum_{i\in S}x_i\T{x_i})$ under the constraint $\sum_{i\in S}c_i\leq B$; our objective function corresponds to the information gain in $\beta$ when it is learned through linear regression methods, and is related to the so-called $D$-optimality criterion. We present the first known
-deterministic, polynomial time, truthful, budget feasible mechanism for \EDP{}.
-Our mechanism yields a constant factor ($\approx 19.68$) approximation, and we show that no truthful, budget-feasible algorithms are possible within a factor 2 approximation.
-Our approach here generally applies to a wider class of learning problems and
-obtains polynomial time universally truthful (\emph{i.e.}, randomized) budget feasible mechanism, also within a constant factor approximation.
+We initiate the study of budgeted mechanisms for experimental design. In this setting, \E{} has a budget $B$.
+Each subject $i$ declares an associated cost $c_i>0$ to be part of the experiment, and must be paid at least that cost. Further, the subjects
+are \emph{strategic} and may lie about their costs. In particular, we formulate the {\em Strategic Experimental Design Problem} (\SEDP) as finding a set $S$ of subjects for the experiment that maximizes $V(S) = \log\det(I_d+\sum_{i\in S}x_i\T{x_i})$ under the constraint $\sum_{i\in S}c_i\leq B$; our objective function corresponds to the information gain in the parameter $\beta$ that is learned through linear regression methods, and is related to the so-called $D$-optimality criterion.
+We present a deterministic, polynomial-time, truthful, budget-feasible mechanism for \SEDP{}.
+By applying previous work on budget-feasible mechanisms with a submodular objective, one could derive either an exponential-time deterministic mechanism or a randomized polynomial-time mechanism. Our mechanism yields a constant-factor ($\approx 12.68$) approximation, and we show that no truthful, budget-feasible mechanism can achieve an approximation within a factor of $2$. We also show how to apply our approach to a wide class of learning problems.
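
For reference, a minimal numeric sketch of the objective $V(S)=\log\det(I_d+\sum_{i\in S}x_i\T{x_i})$ introduced in the revised abstract. This sketch assumes numpy; the greedy-by-ratio selection is only an illustration of the budgeted optimization, not the truthful mechanism described in the paper, and all names below are hypothetical.

    import numpy as np

    def information_gain(X, S):
        # V(S) = log det(I_d + sum_{i in S} x_i x_i^T)
        d = X.shape[1]
        A = np.eye(d)
        for i in S:
            A += np.outer(X[i], X[i])
        return np.linalg.slogdet(A)[1]

    def greedy_budgeted_selection(X, costs, budget):
        # Repeatedly add the affordable subject with the best marginal
        # gain per unit cost. Illustrative heuristic only; the paper's
        # mechanism additionally handles strategically reported costs.
        selected, spent = [], 0.0
        remaining = set(range(X.shape[0]))
        while True:
            base = information_gain(X, selected)
            best, best_ratio = None, 0.0
            for i in remaining:
                if spent + costs[i] <= budget:
                    ratio = (information_gain(X, selected + [i]) - base) / costs[i]
                    if ratio > best_ratio:
                        best, best_ratio = i, ratio
            if best is None:
                break
            selected.append(best)
            spent += costs[best]
            remaining.remove(best)
        return selected

    # Toy usage: 5 subjects with 2-dimensional feature vectors.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 2))        # feature vectors x_i in R^2
    costs = rng.uniform(1.0, 3.0, 5)   # declared costs c_i
    S = greedy_budgeted_selection(X, costs, budget=4.0)
    print(S, information_gain(X, S))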