AI Summary
False news detection models lack robustness against stylistic adversarial attacks, particularly dynamic perturbations driven by large language models (LLMs), and suffer significant performance degradation under them. To address this, we propose AdStyle, an LLM-based adversarial style augmentation framework that proactively generates diverse, challenging style-transfer attack samples via prompt engineering, using them for data augmentation and robust model training. Unlike template-based approaches, AdStyle requires no predefined attack patterns and generalizes to unseen stylistic perturbations (e.g., tone shifting, syntactic rewriting). Evaluated on multiple benchmark datasets, AdStyle consistently improves detection accuracy and robustness, maintaining over 85% detection precision across various state-of-the-art stylistic attacks. It overcomes the limitations of conventional static-feature defenses and establishes a new paradigm for building trustworthy false news detection systems.
Abstract
The spread of fake news harms individuals and presents a critical social challenge that must be addressed. Although numerous algorithms and insightful features have been developed to detect fake news, many of these features can be manipulated by style-conversion attacks, especially with the emergence of advanced language models, making fake news harder to distinguish from genuine news. This study proposes adversarial style augmentation, AdStyle, designed to train a fake news detector that remains robust against various style-conversion attacks. Its primary mechanism is the strategic use of LLMs to automatically generate a diverse and coherent set of style-conversion attack prompts, favoring prompts that are especially challenging for the current detector. Experiments on fake news benchmark datasets show that our augmentation strategy significantly improves robustness and detection performance.
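The augmentation loop described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names (`rewrite_with_style`, `detector_confidence`) and the style prompts are hypothetical stand-ins for an LLM call and a trained detector.

```python
# Sketch of adversarial style augmentation: generate style-converted
# variants of an article, then keep the ones the current detector finds
# hardest (lowest confidence), to be used as training data.
# All names here are illustrative placeholders, not the paper's API.
import random

STYLE_PROMPTS = [
    "Rewrite the article in a neutral, encyclopedic tone.",
    "Rewrite the article as a sensational tabloid piece.",
    "Rewrite the article in formal academic prose.",
]

def rewrite_with_style(article: str, prompt: str) -> str:
    # Placeholder for an LLM API call applying the style-conversion prompt.
    return f"[{prompt}] {article}"

def detector_confidence(article: str) -> float:
    # Placeholder for the detector's confidence in the true label.
    return random.random()

def augment(article: str, k: int = 2) -> list[str]:
    """Generate style-converted variants and keep the k most challenging
    ones, i.e., those the detector is least confident about."""
    variants = [rewrite_with_style(article, p) for p in STYLE_PROMPTS]
    variants.sort(key=detector_confidence)  # lowest confidence first
    return variants[:k]

hard_samples = augment("Example article text.")
print(len(hard_samples))
```

In a real pipeline, the selected hard samples would be mixed into the training set and the detector retrained, repeating the generate-and-select cycle.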