🤖 AI Summary
Existing social bot detectors exhibit poor out-of-distribution robustness because they rely on spurious correlations (i.e., shortcut learning), leaving them particularly vulnerable to misleading textual cues. This work identifies deficient causal modeling at the textual feature level as the root cause. We propose a large language model (LLM)-driven, three-tier counterfactual data augmentation framework: (1) individual text reconstruction, (2) distributional shift simulation, and (3) model-level causal regularization, which together mitigate shortcut learning. By jointly optimizing counterfactual data generation and causal inference, our method significantly enhances generalization to unseen distributions. Experiments show that under typical shortcut-learning scenarios, baseline models suffer an average relative accuracy drop of 32%, whereas our approach achieves an average relative performance gain of 56%. To the best of our knowledge, this is the first work to deeply integrate LLM-powered counterfactual augmentation with multi-level causal learning, establishing a novel paradigm for robust social bot detection.
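The core idea of the first two tiers can be illustrated with a minimal sketch. Here the "spurious cue" (presence of `!!` in a tweet), the `counterfactual_rewrite` stand-in for the LLM rewriting step, and the toy dataset are all hypothetical placeholders, not the paper's actual implementation: each example is paired with a cue-flipped counterfactual so the cue carries no information about the label.

```python
# Hypothetical spurious cue: presence of "!!" in the text
# (a placeholder for the real textual shortcuts studied in the paper).
def has_cue(text):
    return "!!" in text

def counterfactual_rewrite(text):
    # Stand-in for the LLM rewrite step: toggle the spurious cue
    # while leaving task-relevant content unchanged.
    return text.replace("!!", ".") if has_cue(text) else text + " !!"

def augment(dataset):
    # Pair every example with its cue-flipped counterfactual so the
    # cue-label correlation is broken at the dataset level.
    return dataset + [(counterfactual_rewrite(t), y) for t, y in dataset]

# Toy data where the cue is perfectly correlated with the bot label (y=1).
data = [("buy now!!", 1), ("free crypto!!", 1),
        ("lovely weather today", 0), ("see you at lunch", 0)]

aug = augment(data)
# After augmentation the cue appears equally often in both classes,
# so a detector can no longer exploit it as a shortcut.
p_cue_bot = sum(has_cue(t) for t, y in aug if y == 1) / sum(1 for _, y in aug if y == 1)
p_cue_human = sum(has_cue(t) for t, y in aug if y == 0) / sum(1 for _, y in aug if y == 0)
print(p_cue_bot, p_cue_human)  # → 0.5 0.5
```

The third tier would additionally add a consistency (causal regularization) term during training, penalizing prediction differences between an example and its counterfactual.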
📝 Abstract
While existing social bot detectors perform well on benchmarks, their robustness across diverse real-world scenarios remains limited due to unclear ground truth and varied misleading cues. In particular, the impact of shortcut learning, where models rely on spurious correlations instead of capturing causal, task-relevant features, has received limited attention. To address this gap, we conduct an in-depth study of how detectors are influenced by potential shortcuts in textual features, which are the features most susceptible to manipulation by social bots. We design a series of shortcut scenarios that construct spurious associations between user labels and superficial textual cues to evaluate model robustness. Results show that shifts in the distribution of task-irrelevant features significantly degrade detector performance, with an average relative accuracy drop of 32% across baseline models. To tackle this challenge, we propose mitigation strategies based on large language models that leverage counterfactual data augmentation. These strategies address the problem from both the data and model perspectives at three levels: the data distribution of individual user texts, the data distribution of the overall dataset, and the model's ability to extract causal information. They achieve an average relative performance improvement of 56% under shortcut scenarios.