The Impact of Data Characteristics on GNN Evaluation for Detecting Fake News

📅 2025-12-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing fake news detection benchmarks (e.g., GossipCop, PolitiFact) exhibit shallow, ego-like graph topologies with limited propagation diversity. As a result, graph neural networks (GNNs) show negligible gains over structure-agnostic multilayer perceptrons (MLPs): differences of at most 2% with overlapping confidence intervals, which hinders rigorous evaluation of structural modeling capability. Method: Through systematic graph topology analysis and controlled ablations that shuffle node features and randomize edges, the authors quantitatively demonstrate that structural information contributes little to predictive performance on these benchmarks. Contribution/Results: The paper identifies a critical structural deficiency in mainstream datasets, namely insufficient propagation depth and heterogeneity, and argues for constructing new benchmarks with richer, more realistic diffusion patterns. This work offers methodological reflection on evaluating GNNs' true utility in fake news detection and outlines concrete directions for next-generation benchmark development.

📝 Abstract
Graph neural networks (GNNs) are widely used for the detection of fake news by modeling the content and propagation structure of news articles on social media. We show that two of the most commonly used benchmark datasets, GossipCop and PolitiFact, are poorly suited to evaluating the utility of models that use propagation structure. Specifically, these datasets exhibit shallow, ego-like graph topologies that provide little or no ability to differentiate among modeling methods. We systematically benchmark five GNN architectures against a structure-agnostic multilayer perceptron (MLP) that uses the same node features. We show that MLPs match or closely trail the performance of GNNs, with performance gaps often within 1-2% and overlapping confidence intervals. To isolate the contribution of structure in these datasets, we conduct controlled experiments where node features are shuffled or edge structures randomized. We find that performance collapses under feature shuffling but remains stable under edge randomization. This suggests that structure plays a negligible role in these benchmarks. Structural analysis further reveals that over 75% of nodes are only one hop from the root, exhibiting minimal structural diversity. In contrast, on synthetic datasets where node features are noisy and structure is informative, GNNs significantly outperform MLPs. These findings provide strong evidence that widely used benchmarks do not meaningfully test the utility of modeling structural features, and they motivate the development of datasets with richer, more diverse graph topologies.
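The two ablations in the abstract can be illustrated with a minimal sketch. The function names, the toy propagation tree, and the seed values below are illustrative assumptions, not code from the paper: feature shuffling permutes which node gets which feature vector (destroying feature-to-node alignment while preserving the feature distribution), and edge randomization replaces the propagation tree with the same number of uniformly random edges (destroying topology while keeping graph size fixed).

```python
import random

def shuffle_features(node_features, seed=0):
    """Permute feature vectors across nodes (feature-shuffling ablation).
    The multiset of features is preserved; only their assignment changes."""
    rng = random.Random(seed)
    shuffled = list(node_features)
    rng.shuffle(shuffled)
    return shuffled

def randomize_edges(num_nodes, num_edges, seed=0):
    """Replace the edge set with uniformly random edges
    (edge-randomization ablation). Node count and edge count are kept."""
    rng = random.Random(seed)
    return [(rng.randrange(num_nodes), rng.randrange(num_nodes))
            for _ in range(num_edges)]

# Toy propagation tree: root 0 with four direct replies (ego-like,
# mirroring the shallow topology the paper criticizes).
features = [[0.9], [0.1], [0.2], [0.1], [0.3]]
edges = [(0, 1), (0, 2), (0, 3), (0, 4)]

shuffled = shuffle_features(features, seed=42)
rand_edges = randomize_edges(num_nodes=5, num_edges=len(edges), seed=42)
```

Under the paper's finding, a model retrained on `shuffled` features should collapse toward chance, while a model retrained on `rand_edges` should score about the same as on the original graph.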
Problem

Research questions and friction points this paper is trying to address.

Evaluating GNNs' effectiveness in fake news detection using current benchmarks.
Assessing the role of graph structure versus node features in model performance.
Identifying limitations in benchmark datasets for structural modeling in GNNs.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Benchmarking GNNs against MLPs on shallow graphs
Controlled experiments with shuffled features and randomized edges
Synthetic datasets with noisy features and informative structure
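The third bullet can be sketched as a toy data generator in the spirit the abstract describes: labels are encoded only in topology while node features are pure noise, so a feature-only MLP has no signal but any structure-aware statistic separates the classes. All names and the chain-vs-star construction are illustrative assumptions, not the paper's actual generator.

```python
import random
from collections import deque

def make_graph(structural_label, num_nodes=8, seed=0):
    """Toy propagation graph whose label lives only in topology:
    label 1 -> deep chain, label 0 -> shallow ego-like star.
    Node features are Gaussian noise, carrying no label information."""
    rng = random.Random(seed)
    if structural_label == 1:
        edges = [(i, i + 1) for i in range(num_nodes - 1)]  # deep chain
    else:
        edges = [(0, i) for i in range(1, num_nodes)]       # star around root
    features = [[rng.gauss(0.0, 1.0)] for _ in range(num_nodes)]
    return features, edges

def max_depth(edges, root=0):
    """Longest hop distance from the root (BFS): a purely structural
    statistic that a GNN can exploit but an MLP on node features cannot."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
    depth = {root: 0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, []):
            if v not in depth:
                depth[v] = depth[u] + 1
                queue.append(v)
    return max(depth.values())
```

On this construction, thresholding `max_depth` classifies perfectly (a chain of 8 nodes has depth 7, a star has depth 1), which is the regime where the paper reports GNNs significantly outperforming MLPs.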