🤖 AI Summary
This study investigates the efficacy of large language model (LLM)-driven controllable misinformation generation (CMG) as a data augmentation method for stance detection, i.e., classifying the stance of COVID-19 tweets toward associated claims. We fine-tune BERT-based classifiers and comparatively evaluate CMG against conventional augmentation techniques—including back-translation and synonym replacement—using both human annotation and automated metrics. Our systematic analysis reveals, for the first time, that built-in LLM safety mechanisms substantially constrain the generation of high-quality, stance-discriminative rumor samples, yielding only marginal and unstable performance gains from CMG, which fails to significantly outperform baselines on most evaluation metrics. This finding challenges the prevailing assumption that LLM-generated data inherently improves downstream task performance and offers empirical evidence cautioning against uncritical adoption of LLM-based augmentation in misinformation detection. The code and annotated dataset are publicly released.
📝 Abstract
Misinformation surrounding emerging outbreaks poses a serious societal threat, making robust countermeasures essential. One promising approach is stance detection (SD), which identifies whether social media posts support or oppose misleading claims. In this work, we fine-tune classifiers on COVID-19 misinformation SD datasets consisting of claims and corresponding tweets. Specifically, we test controllable misinformation generation (CMG) with large language models (LLMs) as a data augmentation method. While CMG shows potential for expanding training datasets, our experiments reveal that its performance gains over traditional augmentation methods are often minimal and inconsistent, primarily due to built-in safeguards within LLMs. We release our code and datasets to facilitate further research on misinformation detection and generation.
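To make one of the conventional baselines concrete, synonym replacement can be sketched in a few lines of Python. This is an illustrative toy, not the paper's implementation: the synonym table, probability parameter, and function name are hypothetical, and a real pipeline would typically draw synonyms from a lexical resource such as WordNet.

```python
import random

# Toy synonym table for illustration; a real augmentation pipeline
# would use a lexical resource (e.g., WordNet) instead.
SYNONYMS = {
    "virus": ["pathogen"],
    "cure": ["remedy"],
    "spreads": ["transmits"],
}

def synonym_replace(tweet: str, p: float = 0.3, seed: int = 0) -> str:
    """Replace each word with a random synonym with probability p,
    producing an augmented copy of the input tweet."""
    rng = random.Random(seed)  # seeded for reproducible augmentation
    out = []
    for word in tweet.split():
        key = word.lower()
        if key in SYNONYMS and rng.random() < p:
            out.append(rng.choice(SYNONYMS[key]))
        else:
            out.append(word)  # keep words without a known synonym
    return " ".join(out)

# Example: with p=1.0 every known word is swapped.
# synonym_replace("virus spreads fast", p=1.0) -> "pathogen transmits fast"
```

Because such surface-level rewrites preserve the tweet's stance label, they serve as a cheap augmentation baseline against which LLM-based generation is compared.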