commit d6da106c03463113bb1356943f6b4084834868ef
tree   7b01bbf57d00cd39e5649d7da754b3463caa17f6
parent f0b3b883fd8f652fdaa245c99d9ccd29edff2dc5
Author: Thibaut Horel <thibaut.horel@gmail.com>
Date:   2015-03-18 18:32:02 -0400

    Add small comment

 results.tex | 4 ++++
 1 file changed, 4 insertions(+), 0 deletions(-)
diff --git a/results.tex b/results.tex
index 974209b..2a21586 100644
--- a/results.tex
+++ b/results.tex
@@ -96,6 +96,10 @@ $k$. I think it would make more sense from the learning perspective to consider
 uniformly random sets. In this case, the above approach allows us to conclude
 directly.
+More generally, I think the “right” way to think about this is to look at $A$
+as the adjacency matrix of a random $k$-regular graph. There are many
+results on the probability that such an adjacency matrix is non-singular.
+
 \section{Passive Optimization of Additive Functions}
 Let $\Omega$ be a universe of elements of size $n$. For $k \in [n]$, let $D_k$ be the uniform probability distribution over all sets of size $k$, and let $D_p$ be the product probability distribution over sets such that $\forall j \in \Omega, \mathbb{P}_{S \sim D_p}(j \in S) = p$. Note that $D_{k}$ and $D_{p}$ for $p \equiv \frac{k}{n}$ are very similar, and will be considered almost interchangeably throughout the following notes.
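A quick empirical sketch of the comment added in this commit (not part of the patch, and a simplification: each row below is an independent uniform $k$-subset, so the matrix is row-regular rather than the adjacency matrix of a true $k$-regular graph). It samples such 0/1 matrices and estimates how often they are non-singular; all names and parameters are illustrative.

import numpy as np

rng = np.random.default_rng(0)

def random_set_matrix(n, k):
    """n x n 0/1 matrix whose rows are indicators of uniform k-subsets."""
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        A[i, rng.choice(n, size=k, replace=False)] = 1
    return A

def nonsingular_rate(n, k, trials=200):
    """Fraction of sampled matrices with full rank, i.e. invertible."""
    return sum(
        np.linalg.matrix_rank(random_set_matrix(n, k)) == n
        for _ in range(trials)
    ) / trials

for k in (2, 3, 5):
    print(f"n=20, k={k}: non-singular in {nonsingular_rate(20, k):.0%} of trials")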
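Similarly, a hedged sketch of the two sampling models defined at the start of the new section: $D_k$ draws a uniformly random size-$k$ set, while $D_p$ includes each element independently with probability $p$. For $p = k/n$ the per-element marginals coincide, which is why the notes treat the two almost interchangeably. Function names are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(1)

def sample_Dk(n, k):
    """D_k: uniform over all size-k subsets of {0, ..., n-1}."""
    return set(rng.choice(n, size=k, replace=False))

def sample_Dp(n, p):
    """D_p: each element included independently with probability p."""
    return set(np.flatnonzero(rng.random(n) < p))

n, k, trials = 30, 6, 20000
# Empirical marginal P(0 in S) under each model; both should be near k/n.
mk = sum(0 in sample_Dk(n, k) for _ in range(trials)) / trials
mp = sum(0 in sample_Dp(n, k / n) for _ in range(trials)) / trials
print(f"D_k marginal: {mk:.3f}, D_p marginal: {mp:.3f}, k/n = {k/n:.3f}")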