| -rw-r--r-- | general.tex | 6 |
| -rw-r--r-- | main.tex    | 6 |
2 files changed, 6 insertions, 6 deletions
diff --git a/general.tex b/general.tex
index 46337d6..3815c83 100644
--- a/general.tex
+++ b/general.tex
@@ -45,7 +45,7 @@ analysis of the approximation ratio, we get the following result which extends
 Theorem~\ref{thm:main}:
 \begin{theorem}
-  There exists a truthful and budget feasible mechanism for the objective
+  There exists a truthful, individually rational and budget feasible mechanism for the objective
   function $\tilde{V}$ given by \eqref{eq:normalized}. Furthermore, for any
   $\varepsilon > 0$, in time $O(\text{poly}(|\mathcal{N}|, d, \log\log \varepsilon^{-1}))$, the
   algorithm computes a set $S^*$ such that:
@@ -64,8 +64,8 @@ Selecting experiments that maximize the information gain in the Bayesian setup l
 where $h\in \mathcal{H}$ for some subset $\mathcal{H}$ of all possible mappings $h:\Omega\to\reals$, called the \emph{hypothesis space}, and $\varepsilon_i$ are random variables in $\reals$, not necessarily identically distributed, that are independent \emph{conditioned on $h$}. This model is quite broad, and captures many learning tasks, such as:
 \begin{enumerate}
 \item\textbf{Generalized Linear Regression.} $\Omega=\reals^d$, $\mathcal{H}$ is the set of linear maps $\{h(x) = \T{\beta}x \text{ s.t. } \beta\in \reals^d\}$, and $\varepsilon_i$ are independent zero-mean normal variables, where $\expt{\varepsilon_i^2}=\sigma_i$.
-\item\textbf{Logistic Regression.} $\Omega=\reals^d$, $\mathcal{H}$ is the set of maps $\{h(x) = \frac{e^{\T{\beta} x}}{1+e^{\T{\beta} x}} \text{ s.t. } \beta\in\reals^d\}$, and $\varepsilon_i$ are independent conditioned on $h$ such that $$\varepsilon_i=\begin{cases} 1- h(x_i),& \text{w.~prob.}~h(x_i)\\-h(x_i),&\text{w.~prob.}~1-h(x_i)\end{cases}$$
-\item\textbf{Learning Binary Functions with Bernoulli Noise.} $\Omega = \{0,1\}^d$, and $\mathcal{H}$ is some subset of $2^{\Omega\times\{0,1\}}$, and $$\varepsilon_i =\begin{cases}0, &\text{w.~prob.}~p\\\bar{h}(x_i)-h(x_i), \text{w.~prob.}~1-p\end{cases}$$
+\item\textbf{Logistic Regression.} $\Omega=\reals^d$, $\mathcal{H}$ is the set of maps $\{h(x) = \frac{e^{\T{\beta} x}}{1+e^{\T{\beta} x}} \text{ s.t. } \beta\in\reals^d\}$, and $\varepsilon_i$ are independent conditioned on $h$ such that $$\varepsilon_i=\begin{cases} 1- h(x_i),& \text{w.~prob.}\;h(x_i)\\-h(x_i),&\text{w.~prob.}\;1-h(x_i)\end{cases}$$
+\item\textbf{Learning Binary Functions with Bernoulli Noise.} $\Omega = \{0,1\}^d$, and $\mathcal{H}$ is some subset of $2^{\Omega}$, and $$\varepsilon_i =\begin{cases}0, &\text{w.~prob.}\;p\\\bar{h}(x_i)-h(x_i), &\text{w.~prob.}\;1-p\end{cases}$$
 \end{enumerate}
 In this setup, assume that the experimenter has a prior distribution on the hypothesis $h\in \mathcal{H}$.
 Then, the information gain objective can be written again as the mutual information between $\beta$ and $y_S$.
@@ -87,7 +87,7 @@ problems, Chen et al.~\cite{chen}
 %for \textsc{Knapsack} and Singer
 propose comparing $V(i^*)$ to
 %$OPT(R_{\mathcal{N}\setminus\{i\}}, B)$, where
 $R$ denotes the optimal value of a \emph{fractional relaxation} of the function $V$ over the set
-$\mathcal{N}$. A fuction $R:[0, 1]^{n}\to\reals_+$ defined on the hypercube $[0, 1]^{n}$ is a fractional relaxation of $V$
+$\mathcal{N}$. A function $R:[0, 1]^{n}\to\reals_+$ defined on the hypercube $[0, 1]^{n}$ is a fractional relaxation of $V$
 over the set $\mathcal{N}$ if
 %(a) $R$ is a function defined on the hypercube $[0, 1]^{n}$ and
 (b) $R(\id_S) = V(S)$ for all $S\subseteq\mathcal{N}$, where $\id_S$ denotes the indicator vector of $S$.
 The optimization program
diff --git a/main.tex b/main.tex
--- a/main.tex
+++ b/main.tex
@@ -184,7 +184,7 @@ characterization from Singer \cite{singer-mechanisms} gives a formula to
 We can now state our main result:
 \begin{theorem}\label{thm:main}
-  The allocation described in Algorithm~\ref{mechanism}, along with threshold payments, is truthful, individiually rational
+  The allocation described in Algorithm~\ref{mechanism}, along with threshold payments, is truthful, individually rational
   and budget feasible. Furthermore, for any $\varepsilon>0$, the mechanism has
   complexity $O\left(\text{poly}(n, d, \log\log \varepsilon^{-1})\right)$ and
   returns a set $S^*$ such that:
@@ -199,7 +199,7 @@ We can now state our main result:
 Note that this implies we construct a poly-time mechanism with accuracy arbitrarily close to 19.68, by taking $\varepsilon = \ldots$\stratis{fix me}.
 In addition, we prove the following lower bound.
 \begin{theorem}\label{thm:lowerbound}
-There is no $2$-approximate, truthful, budget feasible, individionally rational mechanism for EDP.
+There is no $2$-approximate, truthful, budget feasible, individually rational mechanism for EDP.
 \end{theorem}
 %\stratis{move the proof as appropriate}
 \begin{proof}
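
A worked instance of the information-gain objective touched by the second general.tex hunk may help. For the Generalized Linear Regression example, and under the additional assumption (not stated in the hunks above) of a Gaussian prior $\beta\sim\mathcal{N}(0,\Sigma)$ with $\Sigma$ and $I_d$ introduced only for this sketch, a standard computation gives the mutual information between $\beta$ and the selected observations $y_S=(\T{\beta}x_i+\varepsilon_i)_{i\in S}$ in closed form:
$$ I(\beta; y_S) \;=\; \tfrac{1}{2}\log\det\Big(I_d \;+\; \Sigma^{1/2}\Big(\sum_{i\in S}\sigma_i^{-1}\,x_i\T{x_i}\Big)\Sigma^{1/2}\Big). $$
This is the classical Bayesian D-optimal design criterion; as a set function of $S$ it is monotone and submodular.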

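On the fractional relaxation defined in the third general.tex hunk: condition (b) only pins down $R$ at the vertices of the hypercube, so a concrete example may be useful. One generic choice satisfying it, given purely as an illustration and not necessarily the relaxation the mechanism optimizes, is the multilinear extension
$$ R(\lambda) \;=\; \expt{V(S_\lambda)} \;=\; \sum_{S\subseteq\mathcal{N}} V(S)\,\prod_{i\in S}\lambda_i\prod_{j\in\mathcal{N}\setminus S}(1-\lambda_j), $$
where $S_\lambda$ contains each $i\in\mathcal{N}$ independently with probability $\lambda_i$. At an integral point $\lambda=\id_S$ every term except the one indexed by $S$ vanishes, so $R(\id_S)=V(S)$ and condition (b) holds.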