Adversarial Style Augmentation via Large Language Model for Robust Fake News Detection

πŸ“… 2024-06-17
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Fake news detection models lack robustness against stylistic adversarial attacks, particularly dynamic perturbations driven by large language models (LLMs), and suffer significant performance degradation under them. To address this, the paper proposes AdStyle, an LLM-based adversarial style augmentation framework that proactively generates diverse, highly challenging style-transfer attack samples via prompt engineering and uses them for data augmentation and robust training. Unlike template-based approaches, AdStyle requires no predefined attack patterns and generalizes to unseen stylistic perturbations (e.g., tone shifting, syntactic rewriting). Evaluated on multiple benchmark datasets, AdStyle consistently improves detection accuracy and robustness, maintaining over 85% detection precision across various state-of-the-art stylistic attacks, and moves beyond the limitations of conventional static-feature defenses toward trustworthy fake news detection systems.

πŸ“ Abstract
The spread of fake news harms individuals and presents a critical social challenge that must be addressed. Although numerous detection algorithms and insightful features have been developed to detect fake news, many of these features can be manipulated by style-conversion attacks, especially with the emergence of advanced language models, making fake news harder to differentiate from genuine news. This study proposes adversarial style augmentation, AdStyle, designed to train a fake news detector that remains robust against various style-conversion attacks. The primary mechanism is the strategic use of LLMs to automatically generate a diverse and coherent array of style-conversion attack prompts, with an emphasis on producing prompts that are particularly challenging for the detector. Experiments indicate that this augmentation strategy significantly improves robustness and detection performance on fake news benchmark datasets.
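The mechanism described above can be sketched as a simple augmentation loop: restyle each training article with a pool of LLM-generated style-conversion prompts, score each variant with the current detector, and keep the variants the detector finds hardest. This is a minimal illustrative sketch, not the paper's implementation; `llm_style_transfer`, `detector_confidence`, and the prompt pool are hypothetical placeholders standing in for a real LLM call and a trained classifier.

```python
# Hedged sketch of LLM-driven adversarial style augmentation.
# All function names and the prompt pool are illustrative assumptions.
import random

# In AdStyle, style-conversion attack prompts are generated automatically
# by an LLM; here a small hand-written pool stands in for that step.
PROMPT_POOL = [
    "Rewrite the article in a neutral, wire-service tone.",
    "Rewrite the article as an emotional first-person blog post.",
    "Rewrite the article in formal academic prose.",
]

def llm_style_transfer(article: str, prompt: str) -> str:
    """Placeholder for an LLM call that restyles `article` per `prompt`."""
    return f"[{prompt}] {article}"

def detector_confidence(article: str) -> float:
    """Placeholder for a trained detector's confidence in the correct label;
    a deterministic pseudo-score is used here so the sketch is runnable."""
    rng = random.Random(article)
    return rng.random()

def augment(train_set, k=1):
    """For each (article, label) pair, add the k restyled variants the
    detector is least confident about (i.e., the hardest attacks)."""
    augmented = list(train_set)
    for article, label in train_set:
        variants = [(llm_style_transfer(article, p), label) for p in PROMPT_POOL]
        variants.sort(key=lambda v: detector_confidence(v[0]))  # hardest first
        augmented.extend(variants[:k])
    return augmented
```

The detector is then retrained on the augmented set; iterating this generate-score-retrain loop is what drives robustness against unseen style-conversion attacks.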
Problem

Research questions and friction points this paper is trying to address.

Enhancing fake news detection robustness against style-conversion attacks
Utilizing LLMs to generate diverse adversarial style prompts
Improving detector performance on fake news benchmark datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adversarial style augmentation for robust detection
LLM-generated diverse style-conversion attack prompts
Enhancing detector robustness against style manipulation
πŸ”Ž Similar Papers
No similar papers found.