path: root/results.tex
Diffstat (limited to 'results.tex')
 results.tex | 4 ++++
 1 file changed, 4 insertions(+), 0 deletions(-)
diff --git a/results.tex b/results.tex
index 974209b..2a21586 100644
--- a/results.tex
+++ b/results.tex
@@ -96,6 +96,10 @@ $k$. I think it would make more sense from the learning perspective to consider
uniformly random sets. In this case, the above approach allows us to conclude
directly.
+More generally, I think the ``right'' way to think about this is to view $A$
+as the adjacency matrix of a random $k$-regular graph. There are many
+results on the probability that the adjacency matrix of such a graph is singular.
+
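The singularity probability in the remark above can be estimated empirically. The sketch below is a simple proxy model, not necessarily the exact random-graph model intended: it builds a matrix with all row and column sums equal to $k$ by summing $k$ independent random permutation matrices (the adjacency matrix of a random $k$-regular bipartite multigraph), then checks its rank over the reals. The function names are hypothetical.

```python
import numpy as np

def random_regular_like(n, k, rng):
    # Sum of k independent random permutation matrices: a simple
    # proxy for the adjacency matrix of a random k-regular
    # (bipartite) graph. Every row and column sums to exactly k;
    # entries may exceed 1 (multi-edges), which rank ignores.
    A = np.zeros((n, n), dtype=int)
    for _ in range(k):
        A[np.arange(n), rng.permutation(n)] += 1
    return A

def singularity_estimate(n, k, trials=200, seed=0):
    # Fraction of sampled matrices that are singular over the reals.
    rng = np.random.default_rng(seed)
    singular = sum(
        np.linalg.matrix_rank(random_regular_like(n, k, rng)) < n
        for _ in range(trials)
    )
    return singular / trials
```

For moderate $n$ and $k \ge 3$ the estimated singularity probability is typically very small, consistent with the claim that such matrices are almost always invertible.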
\section{Passive Optimization of Additive Functions}
Let $\Omega$ be a universe of elements of size $n$. For $k \in [n]$, let $D_k$ be the uniform probability distribution over all sets of size $k$, and let $D_p$ be the product probability distribution over sets such that $\forall j \in \Omega, \mathbb{P}_{S \sim D_p}(j \in S) = p$. Note that $D_k$ and $D_p$ with $p \equiv \frac{k}{n}$ are very similar, and will be treated almost interchangeably throughout these notes.
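The closeness of $D_k$ and $D_p$ for $p = k/n$ can be checked on the marginals: under both distributions each fixed element is included with probability $k/n$. A minimal sampling sketch (function names are illustrative, not from the notes):

```python
import numpy as np

def sample_Dk(n, k, rng):
    # D_k: a uniformly random subset of {0, ..., n-1} of size exactly k.
    return set(rng.choice(n, size=k, replace=False))

def sample_Dp(n, p, rng):
    # D_p: include each element independently with probability p.
    return {j for j in range(n) if rng.random() < p}

rng = np.random.default_rng(0)
n, k = 30, 6
p = k / n
trials = 2000
# Empirical marginal inclusion frequency of element 0 under each distribution.
freq_k = sum(0 in sample_Dk(n, k, rng) for _ in range(trials)) / trials
freq_p = sum(0 in sample_Dp(n, p, rng) for _ in range(trials)) / trials
```

Both frequencies concentrate around $k/n = 0.2$; the distributions differ only in that $D_p$ has a random set size (binomial with mean $k$) while $D_k$ fixes it.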