🤖 AI Summary
This work addresses the challenge that existing rumor detection methods struggle to jointly model textual semantic coherence and dynamic propagation structures, leading to insufficient capture of complex rumor patterns. To overcome this limitation, the authors propose a plug-and-play framework that integrates large language models (LLMs) with graph neural networks (GNNs). The approach employs structured prompts to guide the LLM in evaluating information sub-chains and introduces virtual nodes to explicitly encode implicit semantic consistency as edge relationships in the graph. This design enhances the GNN’s ability to perceive semantic coherence across propagation paths while mitigating LLM-induced biases. The framework is compatible with diverse GNN and LLM architectures, and extensive experiments on multiple benchmarks confirm significant performance gains along with its effectiveness and scalability.
📝 Abstract
The rapid proliferation of rumors on social networks poses a significant threat to information integrity. While rumor dissemination forms complex structural patterns, existing detection methods often fail to capture the intricate interplay between textual coherence and propagation dynamics. Current approaches typically represent nodes through isolated textual embeddings, neglecting the semantic flow along the entire propagation path. To bridge this gap, we introduce a novel framework that integrates Large Language Models (LLMs) as a structural augmentation layer for graph-based rumor detection. Moving beyond conventional methods, our framework employs LLMs to evaluate information sub-chains and strategically introduce a virtual node into the graph. This structural modification converts latent semantic patterns into explicit topological features, effectively capturing the textual coherence that has historically been inaccessible to Graph Neural Networks (GNNs). To ensure reliability, we develop a structured prompt framework that mitigates inherent biases in LLMs while maintaining robust graph learning performance. Furthermore, our proposed framework is model-agnostic: it is not tied to any specific graph learning algorithm or LLM. Its plug-and-play nature allows seamless integration with fine-tuned LLMs and graph techniques in the future, potentially enhancing predictive performance without the need to modify the original algorithms.
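To make the virtual-node idea concrete, here is a minimal runnable sketch of the augmentation step described above. All names (`judge_coherence`, `augment_with_virtual_node`, the threshold `tau`) are illustrative assumptions, not the authors' API, and the structured-prompt LLM call is replaced by a toy word-overlap heuristic so the example runs without a model backend.

```python
def judge_coherence(subchain_texts):
    """Stand-in for a structured-prompt LLM call; returns a score in [0, 1].

    A real implementation would prompt an LLM to rate the semantic
    coherence of the sub-chain; here we use Jaccard word overlap between
    consecutive posts purely to keep the sketch self-contained.
    """
    if len(subchain_texts) < 2:
        return 1.0
    overlaps = []
    for a, b in zip(subchain_texts, subchain_texts[1:]):
        wa, wb = set(a.lower().split()), set(b.lower().split())
        overlaps.append(len(wa & wb) / max(1, len(wa | wb)))
    return sum(overlaps) / len(overlaps)


def augment_with_virtual_node(num_nodes, edges, subchains, texts, tau=0.2):
    """Add one virtual node; wire it to every node on a coherent sub-chain.

    edges     : list of (src, dst) pairs of the propagation graph
    subchains : list of node-id paths (root -> leaf) to be judged
    Returns the augmented edge list; the virtual node gets id `num_nodes`,
    so coherence becomes an explicit topological feature for any GNN.
    """
    virtual = num_nodes
    augmented = list(edges)
    for chain in subchains:
        score = judge_coherence([texts[i] for i in chain])
        if score >= tau:  # only coherent chains connect to the virtual node
            for node in chain:
                augmented.append((virtual, node))
    return augmented


# Tiny propagation tree: a root post (0) with two replies (1, 2).
texts = ["breaking news about the flood",
         "the flood news is fake",
         "totally unrelated cat picture"]
edges = [(0, 1), (0, 2)]
subchains = [[0, 1], [0, 2]]  # root-to-leaf propagation paths
aug = augment_with_virtual_node(3, edges, subchains, texts)
```

Because the graph structure is all that changes, any off-the-shelf GNN can consume the augmented edge list unmodified, which is the plug-and-play property the abstract emphasizes.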