 paper/rebuttal.txt | 31 +++++++++++++++++++++++++++++--
 1 file changed, 29 insertions(+), 2 deletions(-)
diff --git a/paper/rebuttal.txt b/paper/rebuttal.txt
index 1d56b36..7e13878 100644
--- a/paper/rebuttal.txt
+++ b/paper/rebuttal.txt
@@ -6,8 +6,35 @@ in the independent cascade model, \theta_{i,j} < 0; in the voter model,
 regularization induced by constraints), which is not really clear whether is
 decomposable or not. "
-This is a great point. In fact, the sign constraints are implicit since the
-log-likelihood is undefined if these constraints are violated...
+
+This is a great point and we should have been more explicit about this. Overall
+our results still hold. We need to distinguish between two types of
+constraints:
+
+* the constraints of the type θ_{i,j} < 0, θ_{i,j} ≠ 0. These constraints are
+  already implicitly present in our optimization program: indeed, the
+  log-likelihood function is undefined (or equivalently can be extended to take
+  the value -∞) when these constraints are violated.
+
+* the constraint ∑_j θ_j = 1 for the voter model:
+
+  - We first note that we don't have to enforce this constraint in the
+    optimization program (2): if we solve it without the constraint, the
+    guarantee on the l2 norm (Theorem 2) still applies. The only downside is
+    that the learned parameters might not sum up to one, which is something
+    we might need for applications (e.g. simulations). This is
+    application-dependent and somewhat out of the scope of our paper, but it
+    is easy to prove that if we normalize the learned parameters to sum up to
+    one after solving (2), the l2 guarantee of Theorem 2 loses
+    a multiplicative factor of at most √s.
+
+  - If we know from the beginning that we will need the learned parameters to
+    sum up to one, the constraint can be added to the optimization program.
+    By Lagrangian duality, there exists an augmented objective function (with
+    an additional linear term corresponding to the constraint) such that the
+    maximum of both optimization problems is the same and the solution of the
+    augmented program satisfies the constraint. Theorem 2 applies verbatim to
+    the augmented program and we obtain the same l2 guarantee.
 " In the independent cascade model, nodes have one chance to infect their

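The Lagrangian-duality remark in the second sub-point can be illustrated on a toy concave objective (an assumption for illustration only; the paper's objective is the log-likelihood):

```python
import numpy as np

# Toy concave objective: f(theta) = -||theta - c||^2, maximized subject to
# sum(theta) = 1. The vector c is arbitrary illustrative data.
c = np.array([0.5, 0.3, 0.4])

# Augmented objective: f(theta) + lam * (sum(theta) - 1), with the extra
# linear term enforcing the constraint. Setting its gradient to zero:
#   -2 * (theta - c) + lam = 0  =>  theta = c + lam / 2
# and choosing lam so the constraint holds:
lam = 2 * (1 - c.sum()) / len(c)
theta = c + lam / 2

# The unconstrained maximizer of the augmented objective satisfies the
# constraint, and (the constraint term vanishing there) attains the same
# maximum value as the constrained problem.
assert np.isclose(theta.sum(), 1.0)
```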