🤖 AI Summary
Online stance detection in political discourse faces two key challenges: scarcity of authentic labelled data, and the unreliability of large language models (LLMs) in online deployment, which manifests as inconsistent outputs, bias, and vulnerability to adversarial attacks. To address these, we propose a novel paradigm that integrates LLM-generated synthetic data with uncertainty-driven active learning. Specifically, we generate high-quality synthetic samples offline using Mistral-7B, then distill their knowledge into interpretable classifiers (e.g., RoBERTa) by jointly training on a mix of synthetic and real data. Concurrently, active learning substantially reduces annotation cost. To our knowledge, this is the first work to synergistically combine synthetic data generation and active learning for stance detection. Our approach preserves model interpretability and deployment safety while surpassing fully supervised baselines using only a small number of real labels, achieving a 5.2% F1 improvement on multi-topic political debate datasets along with improved accuracy and robustness.
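The offline generation step could be sketched as a stance-conditioned prompt template that is then sent to Mistral-7B. This is a minimal illustration only: `build_prompt` and its wording are assumptions, not the paper's actual prompt.

```python
def build_prompt(question: str, stance: str) -> str:
    """Build a hypothetical prompt asking an LLM to write a comment
    with a fixed stance on a given debate question. The resulting
    string would be passed to a generation model such as Mistral-7B
    in a secure offline environment."""
    return (
        f"Debate question: {question}\n"
        f"Write a short online comment that is clearly {stance} this proposal.\n"
        "Comment:"
    )

# Generating one synthetic sample per (question, stance) pair gives a
# labelled dataset without any human annotation.
print(build_prompt("Should voting be mandatory?", "in favour of"))
```

Pairing each prompt with its known stance label yields synthetic training pairs for the downstream classifier.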
📝 Abstract
Stance detection holds great potential to improve online political discussions through its deployment in discussion platforms for purposes such as content moderation, topic summarization, or facilitating more balanced discussions. Typically, transformer-based models are employed directly for stance detection, requiring vast amounts of data. However, the wide variety of debate topics in online political discussions makes data collection particularly challenging. LLMs have revived stance detection, but their deployment in online political discussions faces challenges such as inconsistent outputs, biases, and vulnerability to adversarial attacks. We show how LLM-generated synthetic data can improve stance detection for online political discussions: reliable traditional stance detection models handle online deployment, while the text generation capabilities of LLMs are leveraged for synthetic data generation in a secure offline environment. To achieve this, (i) we generate synthetic data for specific debate questions by prompting a Mistral-7B model, and show that fine-tuning with the generated synthetic data can substantially improve the performance of stance detection while remaining interpretable and aligned with real-world data. (ii) Using the synthetic data as a reference, we improve performance even further by identifying the most informative samples in an unlabelled dataset, i.e., those samples the stance detection model is most uncertain about and can benefit from the most. By fine-tuning with both the synthetic data and the most informative samples, we surpass the performance of a baseline model fine-tuned on all true labels, while labelling considerably less data.
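Step (ii)'s selection of the most informative samples can be sketched as entropy ranking over the model's predicted stance probabilities. This is a minimal illustration under the assumption that uncertainty is measured by predictive entropy; the function names are hypothetical and the paper's exact criterion (using synthetic data as a reference) may differ.

```python
import math

def predictive_entropy(probs):
    """Entropy of a class-probability distribution; higher means the
    model is less certain about this sample's stance."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_most_uncertain(unlabelled_probs, k):
    """Return indices of the k unlabelled samples with the highest
    predictive entropy, i.e. the best candidates for human labelling."""
    ranked = sorted(range(len(unlabelled_probs)),
                    key=lambda i: predictive_entropy(unlabelled_probs[i]),
                    reverse=True)
    return ranked[:k]

# Toy example: binary stance probabilities (favour, against) for four comments.
probs = [[0.99, 0.01], [0.55, 0.45], [0.70, 0.30], [0.50, 0.50]]
print(select_most_uncertain(probs, 2))  # → [3, 1]
```

Only the selected samples are sent for annotation, which is how the approach matches a fully supervised baseline while labelling far less data.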