🤖 AI Summary
This study addresses the challenge of stance detection in real-world social media, where targets are often undefined and dynamically evolving, rendering traditional target-specific methods ineffective. The work introduces, for the first time, an open-domain zero-shot stance detection task in which large language models (LLMs) dynamically generate stance targets and adapt to multiple targets without requiring prior target knowledge. Key contributions include the construction of the first Chinese social media stance dataset with multidimensional evaluation metrics and the design of both integrated and two-stage fine-tuning frameworks. Experimental results show that the two-stage fine-tuned Qwen2.5-7B achieves a comprehensive target recognition score of 66.99%, while the integrated fine-tuned DeepSeek-R1-Distill-Qwen-7B attains an F1 score of 79.26% in stance detection.
📝 Abstract
Current stance detection research typically predicts stance from a given target and text. However, in real-world social media scenarios, targets are neither predefined nor static but complex and dynamic. To address this challenge, we propose a novel task: zero-shot stance detection in the wild with Dynamic Target Generation and Multi-Target Adaptation (DGTA), which aims to automatically identify multiple target-stance pairs from text without prior target knowledge. We construct a Chinese social media stance detection dataset and design multi-dimensional evaluation metrics. We explore both integrated and two-stage fine-tuning strategies for large language models (LLMs) and evaluate various baseline models. Experimental results demonstrate that fine-tuned LLMs achieve superior performance on this task: the two-stage fine-tuned Qwen2.5-7B attains the highest comprehensive target recognition score of 66.99%, while the integrated fine-tuned DeepSeek-R1-Distill-Qwen-7B achieves a stance detection F1 score of 79.26%.
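To make the task's input/output structure concrete: unlike classic stance detection, which receives a (target, text) pair and returns one label, DGTA receives only raw text and must emit multiple target-stance pairs. The sketch below is purely illustrative; the output format (`target: … | stance: …`), label names, and parser are assumptions, not the paper's actual prompt or implementation.

```python
# Illustrative sketch of the DGTA task interface (assumed format, not the
# paper's implementation): an LLM is prompted on raw text and emits one
# "target: X | stance: Y" line per identified target; we parse these into pairs.

def parse_target_stance_pairs(llm_output: str) -> list[tuple[str, str]]:
    """Parse lines like 'target: X | stance: favor' into (target, stance) pairs."""
    pairs = []
    for line in llm_output.strip().splitlines():
        if "|" not in line:
            continue  # skip any lines that don't follow the assumed format
        target_part, stance_part = line.split("|", 1)
        target = target_part.split(":", 1)[1].strip()
        stance = stance_part.split(":", 1)[1].strip()
        pairs.append((target, stance))
    return pairs

# Hypothetical model output for a post touching on two distinct targets:
sample = """target: electric vehicles | stance: favor
target: fuel subsidies | stance: against"""
print(parse_target_stance_pairs(sample))
# → [('electric vehicles', 'favor'), ('fuel subsidies', 'against')]
```

In a two-stage setup as described in the abstract, target generation and stance classification would be separate fine-tuned steps; in the integrated setup, a single model would produce pairs like these directly.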