From d6da106c03463113bb1356943f6b4084834868ef Mon Sep 17 00:00:00 2001
From: Thibaut Horel
Date: Wed, 18 Mar 2015 18:32:02 -0400
Subject: Add small comment

---
 results.tex | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/results.tex b/results.tex
index 974209b..2a21586 100644
--- a/results.tex
+++ b/results.tex
@@ -96,6 +96,10 @@ $k$. I think it would make more sense from the learning perspective to
 consider uniformly random sets. In this case, the above approach allows us to
 conclude directly.
 
+More generally, I think the ``right'' way to think about this is to look at $A$
+as the adjacency matrix of a random $k$-regular graph. There are tons of
+results about the probability that the adjacency matrix is non-singular.
+
 \section{Passive Optimization of Additive Functions}
 
 Let $\Omega$ be a universe of elements of size $n$. For $k \in [n]$, let $D_k$ be the uniform probability distribution over all sets of size $k$, and let $D_p$ be the product probability distribution over sets such that $\forall j \in \Omega, \mathbb{P}_{S \sim D_p}(j \in S) = p$. Note that $D_{k}$ and $D_{p}$ for $p \equiv \frac{k}{n}$ are very similar, and will be considered almost interchangeably throughout the following notes.
-- 
cgit v1.2.3-70-g09d2
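
Editor's sketch (not part of the patch): the added comment suggests viewing $A$ as the adjacency matrix of a random $k$-regular graph and appealing to results on its singularity probability. A minimal Python sketch of the corresponding empirical check is below, assuming numpy and networkx are available; the function name singular_fraction is illustrative, not from the source.

import numpy as np
import networkx as nx

def singular_fraction(n, k, trials=200, seed=0):
    """Estimate the probability that the adjacency matrix of a random
    k-regular graph on n vertices is singular, by sampling and checking rank."""
    rng = np.random.default_rng(seed)
    singular = 0
    for _ in range(trials):
        # networkx requires n * k even and k < n.
        g = nx.random_regular_graph(k, n, seed=int(rng.integers(2**31)))
        a = nx.to_numpy_array(g)  # 0/1 adjacency matrix A
        if np.linalg.matrix_rank(a) < n:
            singular += 1
    return singular / trials

if __name__ == "__main__":
    # Example: 3-regular graphs on 20 vertices.
    print(singular_fraction(n=20, k=3))

The rank check is done in floating point, which is adequate for the small entries of a 0/1 adjacency matrix at this scale; an exact check over the integers (or a prime field) would be the more careful choice for larger $n$.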