AI Summary
Existing fake news detection methods suffer from poor cross-domain generalization due to domain bias in training data. Method: This paper proposes FNDCD, a causal-driven debiasing framework, the first to incorporate causal analysis into fake news detection. FNDCD mitigates domain-specific confounding effects via confidence-based reweighting of the classification loss and enforces propagation-structure regularization on graph neural networks (GNNs) to suppress spurious correlations between events and domains. The architecture combines multi-layer perceptrons with GNNs for joint feature learning and graph-based reasoning. Contribution/Results: Evaluated on multiple real-world datasets with non-overlapping news domains, FNDCD achieves an average 8.7% F1-score improvement over state-of-the-art methods and markedly improves robustness to unseen events and cross-domain fake news, demonstrating stronger generalization under domain shift.
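The confidence-based reweighting described above can be sketched in a few lines. This is only an illustrative assumption, not FNDCD's actual formulation: the sketch posits an auxiliary domain classifier whose high-confidence predictions flag samples dominated by domain-specific cues, and down-weights their contribution to the detection loss. All function names here are hypothetical.

```python
import math


def cross_entropy(probs, label):
    # Negative log-likelihood of the true class for one sample.
    return -math.log(probs[label])


def confidence_weights(domain_probs):
    # Assumed weighting scheme: the more confidently a (biased) domain
    # classifier predicts a sample's domain, the more that sample is
    # down-weighted, so domain shortcuts contribute less to training.
    return [1.0 - max(p) for p in domain_probs]


def reweighted_loss(news_probs, labels, domain_probs):
    # Weighted average of per-sample cross-entropy detection losses.
    ws = confidence_weights(domain_probs)
    total = sum(w * cross_entropy(p, y)
                for w, p, y in zip(ws, news_probs, labels))
    return total / sum(ws)
```

For example, a sample whose domain is predicted with probability 0.9 receives weight 0.1, while an ambiguous sample (domain probability 0.5) receives weight 0.5, so the ambiguous sample dominates the averaged loss.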
Abstract
The widespread dissemination of fake news on social media poses significant risks, necessitating timely and accurate detection. However, existing methods struggle with unseen news because they rely on training data from past events and domains, leaving the challenge of detecting novel fake news largely unresolved. To address this, we identify biases in training data tied to specific domains and propose a debiasing solution, FNDCD. Grounded in causal analysis, FNDCD employs a reweighting strategy based on classification confidence, together with propagation-structure regularization, to reduce the influence of domain-specific biases and enhance the detection of unseen fake news. Experiments on real-world datasets with non-overlapping news domains demonstrate FNDCD's effectiveness in improving generalization across domains.
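The abstract does not specify the form of the propagation-structure regularization, but a common choice for regularizing representations over a propagation graph is a Laplacian-style smoothness penalty that discourages connected nodes (e.g. a post and its reposts) from having wildly different embeddings. The sketch below assumes that form; `structure_regularizer` and its inputs are hypothetical.

```python
def structure_regularizer(node_feats, edges):
    # Laplacian-style smoothness penalty over a propagation graph:
    # sum of squared distances between embeddings of connected nodes.
    # node_feats: list of feature vectors, one per node.
    # edges: list of (u, v) index pairs in the propagation tree.
    return sum(
        sum((a - b) ** 2 for a, b in zip(node_feats[u], node_feats[v]))
        for u, v in edges
    )
```

Adding such a term to the training objective penalizes event-specific irregularities in the learned node embeddings, which is one way a regularizer could suppress spurious event-to-domain correlations.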