DiffIM: Differentiable Influence Minimization with Surrogate Modeling and Continuous Relaxation

📅 2025-02-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Influence minimization (IMIN) in social networks—suppressing misinformation propagation via structural interventions—is bottlenecked by existing discrete, non-differentiable methods, which suffer from high computational cost, poor scalability, and incompatibility with deep learning. Method: We propose the first end-to-end differentiable IMIN framework, with three acceleration schemes: (1) a GNN-based influence surrogate model that replaces costly Monte Carlo simulations; (2) continuous relaxation of discrete edge-removal decisions; and (3) a gradient-driven edge selection strategy that avoids per-instance optimization. Contribution/Results: On real-world graphs, the method is empirically Pareto-optimal (no baseline is both faster and more effective) and up to 15,160× faster than the most effective baseline while matching or exceeding its effectiveness. This work establishes an efficient, scalable, and high-performing differentiable framework for influence control.
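The continuous-relaxation idea in scheme (2) can be illustrated with a minimal numpy sketch. Everything here is an assumption-laden stand-in, not the paper's implementation: the `surrogate` function is a hand-written two-hop linear spread proxy in place of the paper's GNN surrogate, and each edge's removal decision is relaxed to a sigmoid of a learnable logit so the influence estimate becomes differentiable in the decisions.

```python
import numpy as np

def surrogate(A, mask, s):
    # Two-hop differentiable spread proxy (illustrative stand-in for the
    # paper's GNN surrogate): f = s^T (M + M^2) 1, where M = A * mask.
    M = A * mask
    one = np.ones(len(A))
    return s @ (M @ one) + s @ (M @ (M @ one))

def surrogate_grad(A, mask, s):
    # Analytic df/dmask for the proxy above (zero on non-edges).
    M = A * mask
    one = np.ones(len(A))
    g = np.outer(s, one) + np.outer(s, M @ one) + np.outer(s @ M, one)
    return g * A

def relaxed_imin(A, s, k, steps=200, lr=0.5):
    # Continuous relaxation: edge (i, j) kept with soft weight
    # sigmoid(theta[i, j]); gradient descent minimizes the surrogate.
    theta = np.zeros_like(A)
    for _ in range(steps):
        mask = 1.0 / (1.0 + np.exp(-theta))
        g = surrogate_grad(A, mask, s) * mask * (1.0 - mask)  # chain rule
        theta -= lr * g
    mask = 1.0 / (1.0 + np.exp(-theta))
    # Discretize: remove the k edges whose soft weight was driven lowest.
    scores = np.where(A > 0, mask, np.inf)
    cut = [np.unravel_index(i, A.shape)
           for i in np.argsort(scores, axis=None)[:k]]
    hard = np.ones_like(A)
    for i, j in cut:
        hard[i, j] = 0.0
    return hard, cut
```

Because the relaxed objective is differentiable end to end, a single optimizer loop replaces the combinatorial search over discrete edge subsets; the final thresholding step recovers a hard removal set.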

📝 Abstract
In social networks, people influence each other through social links, which can be represented as propagation among nodes in graphs. Influence minimization (IMIN) is the problem of manipulating the structures of an input graph (e.g., removing edges) to reduce the propagation among nodes. IMIN can represent time-critical real-world applications, such as rumor blocking, but IMIN is theoretically difficult and computationally expensive. Moreover, the discrete nature of IMIN hinders the usage of powerful machine learning techniques, which require differentiable computation. In this work, we propose DiffIM, a novel method for IMIN with two differentiable schemes for acceleration: (1) surrogate modeling for efficient influence estimation, which avoids time-consuming simulations (e.g., Monte Carlo), and (2) the continuous relaxation of decisions, which avoids the evaluation of individual discrete decisions (e.g., removing an edge). We further propose a third accelerating scheme, gradient-driven selection, that chooses edges instantly based on gradients without optimization (spec., gradient descent iterations) on each test instance. Through extensive experiments on real-world graphs, we show that each proposed scheme significantly improves speed with little (or even no) IMIN performance degradation. Our method is Pareto-optimal (i.e., no baseline is faster and more effective than it) and typically several orders of magnitude (spec., up to 15,160×) faster than the most effective baseline while being more effective.
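The third scheme, gradient-driven selection, can be sketched as follows. This is an illustrative numpy toy, not the paper's method: a hand-written two-hop linear proxy stands in for the learned surrogate, and the point is only the mechanism: one gradient evaluation ranks all edges at once, so edges are chosen instantly with no per-instance gradient-descent iterations.

```python
import numpy as np

def influence_surrogate(A, mask, seeds):
    # Two-hop differentiable spread proxy (illustrative stand-in for a
    # learned surrogate): f = seeds^T (M + M^2) 1, where M = A * mask.
    M = A * mask
    one = np.ones(len(A))
    return seeds @ (M @ one) + seeds @ (M @ (M @ one))

def influence_gradient(A, mask, seeds):
    # Analytic df/dmask of the proxy; entries on non-edges are zero.
    M = A * mask
    one = np.ones(len(A))
    g = (np.outer(seeds, one)            # from the s^T M 1 term
         + np.outer(seeds, M @ one)      # from the s^T M^2 1 term
         + np.outer(seeds @ M, one))
    return g * A

def gradient_driven_removal(A, seeds, k):
    # One gradient evaluation at the all-ones mask; remove the k edges
    # whose gradient is largest, i.e., whose removal most reduces the
    # surrogate influence. No test-time optimization loop.
    mask = np.ones_like(A)
    grad = influence_gradient(A, mask, seeds)
    top = np.argsort(grad, axis=None)[::-1][:k]
    removed = [np.unravel_index(i, A.shape) for i in top]
    for i, j in removed:
        mask[i, j] = 0.0
    return mask, removed
```

On a small directed graph with edges 0→1, 0→2, 1→2 and seed node 0, a single gradient evaluation ranks 0→1 highest (it carries both one-hop and two-hop spread), so it is removed first.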
Problem

Research questions and friction points this paper is trying to address.

Influence Minimization
Social Networks
Machine Learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

DiffIM Method
Influence Minimization
Efficiency Enhancement