Can Large Language Models Address Open-Target Stance Detection?

📅 2024-08-30
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper introduces Open-Target Stance Detection (OTSD), a novel task that requires models to jointly identify the target mentioned in a text and classify the stance toward it (favor/against/none) without relying on a predefined target list. Addressing a key limitation of the existing Target-Stance Extraction (TSE) approach, namely its dependence on a fixed, closed target set, the authors formally define OTSD and propose a target quality evaluation metric that balances interpretability with strong correlation to human judgment. Leveraging large language models (LLMs) from the GPT, Gemini, Llama, and Mistral families, they employ prompt engineering and zero-shot inference for end-to-end target generation and stance classification. Experimental results show that LLMs outperform the TSE baseline on both target generation and stance detection: they achieve strong performance when the target is explicitly mentioned but struggle when it is not. The work establishes the first formal framework for open-target stance detection and provides principled metrics and scalable LLM-based solutions.
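The zero-shot pipeline described above can be sketched as two prompting steps: first generate the target, then classify the stance toward it. The prompt wording, the helper names, and the two-step decomposition below are illustrative assumptions for clarity, not the paper's exact prompts or code.

```python
# Hypothetical sketch of a zero-shot OTSD pipeline: target generation
# followed by stance classification. Prompt text is an assumption,
# not taken from the paper.

STANCES = ("favor", "against", "none")

def target_generation_prompt(text: str) -> str:
    """Step 1: ask an LLM to name the target the text takes a stance on."""
    return (
        "Read the following text and identify the single target "
        "(entity, topic, or claim) toward which the author expresses a stance.\n"
        f"Text: {text}\n"
        "Target:"
    )

def stance_classification_prompt(text: str, target: str) -> str:
    """Step 2: classify the stance toward the generated target."""
    return (
        f"Text: {text}\n"
        f"Target: {target}\n"
        "Classify the author's stance toward the target as one of "
        f"{', '.join(STANCES)}.\n"
        "Stance:"
    )

def parse_stance(raw_completion: str) -> str:
    """Map a free-form model completion onto the closed label set."""
    answer = raw_completion.strip().lower()
    for label in STANCES:
        if label in answer:
            return label
    return "none"  # fall back when no known label is recognized
```

In this sketch the LLM call itself is left abstract: each prompt string would be sent to the model and the completion passed through `parse_stance`, so the same scaffolding works with any of the LLM families evaluated.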

📝 Abstract
Stance detection (SD) identifies a text's position toward a target, typically labeled as favor, against, or none. We introduce Open-Target Stance Detection (OTSD), the most realistic setting, in which targets are neither seen during training nor provided as input. We evaluate Large Language Models (LLMs) from the GPT, Gemini, Llama, and Mistral families, comparing their performance to the only existing work, Target-Stance Extraction (TSE), which benefits from predefined targets. Unlike TSE, OTSD removes the dependency on a predefined list, making target generation and evaluation more challenging. We also provide a metric for evaluating target quality that correlates well with human judgment. Our experiments reveal that LLMs outperform TSE in target generation, both when the real target is explicitly mentioned in the text and when it is not. Similarly, LLMs overall surpass TSE in stance detection for both explicit and non-explicit cases. However, LLMs struggle in both target generation and stance detection when the target is not explicit.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs on unseen targets in stance detection
Comparing LLM performance to the Target-Stance Extraction (TSE) method
Assessing LLM challenges with non-explicit target detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs outperform TSE in target generation
LLMs surpass TSE in stance detection
Proposed target quality metric aligns with human judgment