author    jeanpouget-abadie <jean.pougetabadie@gmail.com>  2015-10-14 13:25:14 -0400
committer jeanpouget-abadie <jean.pougetabadie@gmail.com>  2015-10-14 13:25:14 -0400
commit    67607518a47aed0ee63009695eb81979ab05ca92 (patch)
tree      8749e49131f77d3888e48285fb39aa55a0701fb6 /finale
parent    0841cb8a8d24c13d2579e494699409f6be6b98cc (diff)
download  cascades-67607518a47aed0ee63009695eb81979ab05ca92.tar.gz
adding reference + notation fix
Diffstat (limited to 'finale')
-rw-r--r-- finale/project_proposal.tex | 25
1 file changed, 20 insertions(+), 5 deletions(-)
diff --git a/finale/project_proposal.tex b/finale/project_proposal.tex
index 912b281..5e0d21c 100644
--- a/finale/project_proposal.tex
+++ b/finale/project_proposal.tex
@@ -14,7 +14,7 @@ The network inference problem concerns itself with learning the edges and the
edge weights of an unknown network. Each edge weight $\theta_e, e\in E$ is a
parameter to be estimated. The information at our disposal is the result of a
cascade process on the network. Here, we will focus on the Generalized Linear
-Cascade (GLC) model introduced in~\cite{} presented below.
+Cascade (GLC) model introduced in~\cite{paper}, presented below.
\paragraph{The GLC model}
@@ -56,8 +56,8 @@ linear model. In particular, if $f$ is the sigmoid function, we are performing
logistic regression:
$$
\begin{cases}
-y_i^* = \theta_i \cdot x^t + \epsilon, \text{~where~} \epsilon\sim
-Logistic(0, 1) \\
+y_i^* = \theta_i \cdot x_i + \epsilon, \text{~where~} \epsilon\sim
+Logistic(0, 1) \text{~and~} x_i = {(x^t)}_{t \in \mathcal{T}_i}\\
y_i = [y_i^* > 0] \text{~where~} y_i^t = x_i^{t+1}
\end{cases}
$$
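
A minimal sketch of the latent-variable reduction above, assuming NumPy and scikit-learn are available; the parent count, step count, and true weights below are hypothetical stand-ins for one node's cascade data:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical cascade data for a single node i: each row is the indicator
# vector of which potential parents were infected at some step t.
n_parents, n_steps = 5, 2000
theta_i = rng.normal(0.0, 1.0, size=n_parents)   # assumed true edge weights
X = rng.integers(0, 2, size=(n_steps, n_parents)).astype(float)

# y_i^* = theta_i . x + eps with eps ~ Logistic(0, 1), so
# P(y_i = 1 | x) = sigmoid(theta_i . x): exactly logistic regression.
eps = rng.logistic(0.0, 1.0, size=n_steps)
y = (X @ theta_i + eps > 0).astype(int)

# Recover theta_i by maximum likelihood (penalty=None needs scikit-learn >= 1.2).
fit = LogisticRegression(penalty=None, fit_intercept=False).fit(X, y)
print(np.round(fit.coef_.ravel() - theta_i, 2))  # per-parent estimation error
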
@@ -72,10 +72,25 @@ Can you intuitively link certain node-level/graph-level properties with the
resulting variance on the estimated parameter?
\item Do the previous observations correspond with the theoretical result, given
by the Fisher information matrix: $$\hat \beta \sim \mathcal{N}(\beta,
-I{(\theta)}^{-1})$$ where $I(\theta) = - \left(\frac{\partial^2\log
-\mathcal{L}}{\partial \theta^2} \right)^{-1}$
+I{(\theta)}^{-1})$$ where $I(\theta) = - \mathbb{E}\left[\frac{\partial^2\log
+\mathcal{L}}{\partial \theta^2} \right]$
\item Are there networks in which the Fisher information matrix is singular?
What happens to the estimation of $\beta$ in this case? (See the sketch after
this list.)
+\item What if the cascades are generated with a different link function? Is
+there a regularization scheme that can mitigate bias or exploding variance in
+the estimated parameters?
\end{itemize}
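
A small numerical sketch of the singularity question, using only NumPy; the three-parent design below is a hypothetical example in which two parents are always infected together, so their columns in the design matrix coincide:

import numpy as np

def fisher_information(X, theta):
    # Fisher information of logistic regression at theta:
    # I(theta) = X^T diag(p (1 - p)) X with p = sigmoid(X theta).
    p = 1.0 / (1.0 + np.exp(-X @ theta))
    return X.T @ (X * (p * (1.0 - p))[:, None])

# Parents 0 and 1 always co-occur, so their columns are identical and
# I(theta) is singular: theta_0 and theta_1 are not separately identifiable.
X = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
I = fisher_information(X, np.zeros(3))
print(np.linalg.eigvalsh(I))  # smallest eigenvalue is 0 (up to rounding)

In this regime the asymptotic covariance I(theta)^{-1} does not exist, and the maximum-likelihood estimate of the two tied weights is determined only up to their sum.
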
+\subsection*{Program plan}
+
+The project will be a series of simulations to answer each of the above
+questions. When possible, we will try to explain the results found in the
+simulations with a simplified analysis on toy networks. Thibaut and I have
+worked together in the past and have kept our contributions balanced.
+
+\begin{thebibliography}{1}
+\bibitem{paper} Pouget-Abadie, J. and Horel, T. \emph{Inferring Graphs from
+Cascades: A Sparse Recovery Framework}, ICML 2015.
+\end{thebibliography}
+
\end{document}