REM: A Scalable Reinforced Multi-Expert Framework for Multiplex Influence Maximization

📅 2025-01-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the weak generalizability, poor scalability, and heavy reliance on high-quality labeled data in influence maximization for multilayer social networks, this paper proposes a reinforcement learning–based collaborative multi-expert framework. We formulate seed selection as a learnable policy, integrating graph neural networks with a Mixture-of-Experts (MoE) architecture to dynamically model heterogeneous diffusion processes across layers, enabling fully unsupervised, autonomous optimization of seed sets. Our method jointly leverages policy-gradient reinforcement learning and multilayer diffusion modeling. Extensive experiments on multiple real-world datasets demonstrate significant improvements over state-of-the-art methods: up to 18.7% higher influence spread, 3.2× faster inference, and scalable support for networks with over one million nodes.

📝 Abstract
In online social platforms, identifying influential seed users to maximize influence spread is crucial, as it can greatly diminish the cost and effort required for information dissemination. While effective, traditional methods for Multiplex Influence Maximization (MIM) have reached their performance limits, prompting the emergence of learning-based approaches. These newer methods aim for better generalization and scalability to larger graphs but face significant challenges, such as (1) an inability to handle unknown diffusion patterns and (2) reliance on high-quality training samples. To address these issues, we propose the Reinforced Expert Maximization framework (REM). REM leverages a Propagation Mixture of Experts technique to effectively encode the dynamic propagation of large multiplex networks in order to generate enhanced influence propagation. Notably, REM treats a generative model as a policy that autonomously generates different seed sets and learns how to improve them from a reinforcement learning perspective. Extensive experiments on several real-world datasets demonstrate that REM surpasses state-of-the-art methods in terms of influence spread, scalability, and inference time in influence maximization tasks.
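The abstract's core idea, treating seed-set generation as a policy improved by reinforcement learning over a simulated diffusion reward, can be illustrated with a minimal sketch. The paper's actual policy is a GNN/Mixture-of-Experts model; the toy two-layer network, the softmax-over-node-scores policy, the Independent Cascade simulator, and all parameter values below are illustrative assumptions, not REM's implementation.

```python
import math
import random

random.seed(0)

# Toy multiplex network: each layer is an adjacency dict (illustrative only).
LAYERS = [
    {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []},
    {0: [3], 1: [2], 2: [5], 3: [4], 4: [], 5: [0]},
]
N, K, EDGE_P = 6, 2, 0.5  # nodes, seed-set size, edge activation probability


def softmax(scores):
    m = max(scores)
    e = [math.exp(s - m) for s in scores]
    z = sum(e)
    return [x / z for x in e]


def sample_seeds(probs, k):
    """Sample k distinct seeds proportionally to the policy probabilities."""
    pool, chosen = list(range(len(probs))), []
    for _ in range(k):
        total = sum(probs[i] for i in pool)
        r, acc = random.random() * total, 0.0
        for i in pool:
            acc += probs[i]
            if acc >= r:
                chosen.append(i)
                pool.remove(i)
                break
    return chosen


def simulate_spread(seeds, trials=200):
    """Monte Carlo Independent Cascade, propagating over every layer at once."""
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for layer in LAYERS:
                    for v in layer.get(u, []):
                        if v not in active and random.random() < EDGE_P:
                            active.add(v)
                            nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / trials


# REINFORCE-style loop: the reward (expected spread) nudges the scores of
# sampled seeds up or down relative to a moving baseline.
scores, baseline = [0.0] * N, 0.0
for step in range(50):
    probs = softmax(scores)
    seeds = sample_seeds(probs, K)
    reward = simulate_spread(seeds)
    baseline = 0.9 * baseline + 0.1 * reward
    adv = reward - baseline
    for i in range(N):
        # Approximate d log pi / d score_i for a softmax policy.
        grad = (1.0 if i in seeds else 0.0) - probs[i]
        scores[i] += 0.1 * adv * grad

best = sample_seeds(softmax(scores), K)
print("seed set:", best, "estimated spread:", round(simulate_spread(best), 2))
```

The key design point mirrored here is that no labeled seed sets are needed: the diffusion simulator itself supplies the training signal, which is why the framework is described as fully unsupervised.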
Problem

Research questions and friction points this paper is trying to address.

Influence Identification
Information Diffusion
Learning Sample Scarcity
Innovation

Methods, ideas, or system contributions that make the work stand out.

REM Framework
Hybrid Technique
Self-optimized Influencer Selection
Huyen Nguyen
Posts and Telecommunications Institute of Technology, Hanoi, Vietnam
Hieu Dam
FPT University, Swinburne Vietnam Hanoi campus, Hanoi, Vietnam
Nguyen Do
University of Florida, Gainesville, USA
Cong Tran
PhD, Posts and Telecommunications Institute of Technology, Vietnam
Computer Science · Artificial Intelligence · Machine Learning · Data Mining
Cuong Pham
Posts and Telecommunications Institute of Technology, Hanoi, Vietnam