| author | jeanpouget-abadie <jean.pougetabadie@gmail.com> | 2015-11-06 13:24:37 -0500 |
|---|---|---|
| committer | jeanpouget-abadie <jean.pougetabadie@gmail.com> | 2015-11-06 13:24:37 -0500 |
| commit | a743631400ef997a4748b83bb363f84624cfeb10 | |
| tree | 144ec329cc9a322d87f7dda7c1bf36c8c646f9d5 /finale | |
| parent | 5af0600d0f6c139b4ea4d96ebe08fdc479805c0c | |
adding some references for ERGMs
Diffstat (limited to 'finale')
| -rw-r--r-- | finale/mid_report.tex | 20 |
| -rw-r--r-- | finale/sparse.bib | 44 |
2 files changed, 60 insertions, 4 deletions
diff --git a/finale/mid_report.tex b/finale/mid_report.tex
index 3a808d5..38d9020 100644
--- a/finale/mid_report.tex
+++ b/finale/mid_report.tex
@@ -272,19 +272,31 @@ priors. We can:
 \item Take into account common graph structures, such as triangles, clustering
 \end{itemize}
 
-A common prior for graph is the ERGM model~\cite{}, defined by feature vector
-$s(G)$ and by the probability distribution:
+A common prior for graphs is the Exponential Random Graph Model (ERGM), which
+allows flexible representations of networks and supports Bayesian inference.
+An ERGM family is defined by a feature vector $s(G)$ and the probability
+distribution:
 $$P(G | \Theta) \propto \exp \left( s(G)\cdot \Theta \right)$$
 
+Though straightforward MCMC could be applied here, recent
+work~\cite{caimo2011bayesian, koskinen2010analysing, robins2007recent} shows
+that naive ERGM inference converges slowly and lacks robustness, and develops
+more robust alternatives. Experiments using such a prior are ongoing, but we
+present only simple product-distribution-type priors here.
+
 \paragraph{Inference} We can sample from the posterior by MCMC\@. This might
 not be the fastest solution however. We could greatly benefit from using an
 alternative method:
 \begin{itemize}
-\item EM\@. This approach was used in \cite{linderman2014discovering} to learn
+\item EM\@. This approach was used in \cite{linderman2014discovering,
+simma2012modeling} to learn
 the parameters of a Hawkes process, a closely related inference problem.
 \item Variational Inference. This approach was used
 in~\cite{linderman2015scalable} as an extension to the paper cited in the
-previous bullet point.
+previous bullet point. Considering the scalability of their approach, we hope
+to apply their method to our problem here, given the similarity of the two
+processes and the computational cost of running MCMC over a large parameter
+space.
 \end{itemize}
diff --git a/finale/sparse.bib b/finale/sparse.bib
index 9fc56df..d2487c1 100644
--- a/finale/sparse.bib
+++ b/finale/sparse.bib
@@ -517,3 +517,47 @@ year = "2009"
 journal={arXiv preprint arXiv:1402.0914},
 year={2014}
 }
+
+
+@article{caimo2011bayesian,
+  title={Bayesian inference for exponential random graph models},
+  author={Caimo, Alberto and Friel, Nial},
+  journal={Social Networks},
+  volume={33},
+  number={1},
+  pages={41--55},
+  year={2011},
+  publisher={Elsevier}
+}
+
+@article{koskinen2010analysing,
+  title={Analysing exponential random graph (p-star) models with missing data
+  using Bayesian data augmentation},
+  author={Koskinen, Johan H and Robins, Garry L and Pattison, Philippa E},
+  journal={Statistical Methodology},
+  volume={7},
+  number={3},
+  pages={366--384},
+  year={2010},
+  publisher={Elsevier}
+}
+
+@article{robins2007recent,
+  title={Recent developments in exponential random graph (p*) models for
+  social networks},
+  author={Robins, Garry and Snijders, Tom and Wang, Peng and Handcock, Mark
+  and Pattison, Philippa},
+  journal={Social Networks},
+  volume={29},
+  number={2},
+  pages={192--215},
+  year={2007},
+  publisher={Elsevier}
+}
+
+@article{simma2012modeling,
+  title={Modeling events with cascades of Poisson processes},
+  author={Simma, Aleksandr and Jordan, Michael I},
+  journal={arXiv preprint arXiv:1203.3516},
+  year={2012}
+}
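
For concreteness, a minimal Python sketch of the ERGM prior $P(G \mid \Theta) \propto \exp(s(G) \cdot \Theta)$ and of the naive single-edge-toggle MCMC baseline that the added paragraph warns about. The statistics chosen for $s(G)$ (edge and triangle counts) and all function names are illustrative assumptions, not part of the report or the cited work:

```python
import numpy as np

def ergm_stats(adj):
    """Feature vector s(G) for an undirected graph given as a 0/1
    adjacency matrix. Edge and triangle counts are illustrative
    choices; the report leaves s(G) generic."""
    edges = np.triu(adj, 1).sum()
    triangles = np.trace(adj @ adj @ adj) / 6
    return np.array([edges, triangles])

def log_unnorm(adj, theta):
    """log P(G | theta) up to the intractable normalizing constant:
    the inner product s(G) . theta from the ERGM definition."""
    return ergm_stats(adj) @ theta

def mh_sample(n_nodes, theta, n_steps=10_000, rng=None):
    """Naive Metropolis-Hastings over graphs: propose toggling one
    (i, j) edge at a time. This is the 'straightforward MCMC' the
    text says converges slowly; a baseline, not the recommended
    method from the cited work."""
    rng = rng or np.random.default_rng()
    adj = np.zeros((n_nodes, n_nodes), dtype=int)
    cur = log_unnorm(adj, theta)
    for _ in range(n_steps):
        i, j = rng.choice(n_nodes, size=2, replace=False)
        adj[i, j] ^= 1          # toggle the proposed edge
        adj[j, i] ^= 1
        new = log_unnorm(adj, theta)
        if np.log(rng.random()) >= new - cur:
            adj[i, j] ^= 1      # reject: undo the toggle
            adj[j, i] ^= 1
        else:
            cur = new           # accept
    return adj

# Example: sparse graphs with mild triangle clustering (hypothetical theta).
# g = mh_sample(20, np.array([-2.0, 0.5]), n_steps=5_000)
```

Since each step changes a single edge, $s(G)$ could be updated incrementally rather than recomputed from scratch; the works cited in the patch replace this naive chain with more robust samplers.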
