🤖 AI Summary
Current fake news detection (FND) models exhibit insufficient robustness against adversarial comments, whether crafted by malicious human users or generated by large language models (LLMs). To address this, we propose a group-adaptive adversarial training framework. Methodologically, we first establish a psychology-driven adversarial comment classification system grounded in perceptual, cognitive, and social dimensions. We then introduce a Dirichlet-distribution-based dynamic sampling mechanism to enable cross-category adaptive learning. Furthermore, we integrate LLM-generated diverse adversarial examples with an InfoDirichlet category-aware optimization strategy. Evaluated on multiple benchmark datasets, our model maintains high detection accuracy while significantly improving resilience against heterogeneous adversarial perturbations. Empirical results demonstrate superior robustness compared to state-of-the-art FND approaches.
📄 Abstract
The spread of fake news online distorts public judgment and erodes trust in social media platforms. Although recent fake news detection (FND) models perform well in standard settings, they remain vulnerable to adversarial comments, authored by real users or by large language models (LLMs), that subtly shift model decisions. Motivated by this, we first present a comprehensive evaluation of comment attacks on existing fake news detectors and then introduce a group-adaptive adversarial training strategy to improve the robustness of FND models. Specifically, our approach comprises three steps: (1) dividing adversarial comments into three psychologically grounded categories: perceptual, cognitive, and societal; (2) generating diverse, category-specific attacks via LLMs to enhance adversarial training; and (3) applying a Dirichlet-based adaptive sampling mechanism (InfoDirichlet Adjusting Mechanism) that dynamically adjusts the learning focus across different comment categories during training. Experiments on benchmark datasets show that our method maintains strong detection accuracy while substantially increasing robustness to a wide range of adversarial comment perturbations.
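To make step (3) concrete, here is a minimal sketch of what Dirichlet-based adaptive sampling over the three comment categories could look like. The function names, the loss-driven concentration update, and the learning-rate parameter are illustrative assumptions, not the paper's exact InfoDirichlet mechanism.

```python
import numpy as np

# Hypothetical categories from the paper's psychologically grounded taxonomy.
CATEGORIES = ["perceptual", "cognitive", "societal"]

def sample_category_weights(alpha, rng):
    """Draw mixing weights over comment categories from Dirichlet(alpha).

    The weights decide what fraction of the next adversarial-training
    batch comes from each category.
    """
    return rng.dirichlet(alpha)

def update_concentration(alpha, per_category_loss, lr=1.0):
    """Illustrative update rule (an assumption, not the paper's formula):
    raise the concentration for categories with higher loss so the model
    focuses on the categories it is currently weakest against.
    """
    loss = np.asarray(per_category_loss, dtype=float)
    return alpha + lr * loss / loss.sum()

rng = np.random.default_rng(0)
alpha = np.ones(len(CATEGORIES))  # start from a uniform prior
weights = sample_category_weights(alpha, rng)
# Suppose the detector's loss is highest on perceptual attacks this round:
alpha = update_concentration(alpha, per_category_loss=[0.9, 0.4, 0.2])
```

After the update, subsequent draws from `Dirichlet(alpha)` are biased toward the high-loss category, which is the intuition behind letting sampling "adapt" across groups during training.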