The experimental results obtained in Section 5 are promising and confirm the relevance of using a Bayesian approach to the Network Inference problem. However, we believe that many other aspects of Bayesian inference could and should be exploited in the context of Network Inference. We wish to explore this in future work and only highlight a few possible directions here:
\begin{itemize}
    \item obtain formal guarantees on the convergence of the Bayesian posterior measure. Similarly to the convergence rate results obtained for maximum likelihood estimation, we believe that convergence results could also be obtained in the Bayesian setting, at least in restricted settings or under certain assumptions about the network being learned.
    \item strengthen the experimental results by systematically studying how different network properties affect the speedup induced by active learning.
    \item finish formally deriving the update equations when using the Bohning approximation for Variational Inference (the scalar form of the bound is recalled below).
    \item extend the combined Variational Inference and Bohning approximation to Hawkes processes, so as to obtain a unified Bayesian framework for both discrete-time and continuous-time models.
    \item explore the impact of using more expressive (in particular non-factorized) variational posteriors on the speed of convergence, both in offline and active online learning.
\end{itemize}
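For reference, and without anticipating the full derivation mentioned above, the scalar form of the Bohning bound is a quadratic upper bound on the logistic log-partition function with fixed curvature (its second derivative, $\sigma(x)\bigl(1-\sigma(x)\bigr)$, is bounded by $1/4$): for any $x, \xi \in \mathbb{R}$,
\[
\log\!\left(1 + e^{x}\right) \;\le\; \log\!\left(1 + e^{\xi}\right) \,+\, \sigma(\xi)\,(x - \xi) \,+\, \tfrac{1}{8}\,(x - \xi)^{2},
\qquad \sigma(\xi) = \frac{1}{1 + e^{-\xi}},
\]
where $\xi$ is a variational parameter to be optimized alongside the posterior approximation. How this bound is combined with the discrete-time likelihood of our model, and its matrix analogue for the continuous-time (Hawkes) case, are precisely what the future derivation would make explicit.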