Exploring the Performance of Large Language Models on Subjective Span Identification Tasks

πŸ“… 2026-01-02
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This study addresses a gap in the literature by systematically evaluating the capacity of large language models (LLMs) to identify subjective text spans, a capability not previously assessed in a comprehensive manner. The work examines LLM performance across three representative tasks: sentiment analysis, offensive language identification, and claim verification. Using a combination of instruction tuning, in-context learning, and chain-of-thought prompting, the experiments indicate that underlying semantic relationships within text help LLMs locate precise, segment-level subjective spans. The findings underscore the role of textual structure in fine-grained subjective understanding and provide a methodological framework and empirical baseline for future research in this area.

πŸ“ Abstract
Identifying relevant text spans is important for several downstream tasks in NLP, as it contributes to model explainability. While most span identification approaches rely on smaller pre-trained language models such as BERT, a few recent approaches have leveraged the latest generation of Large Language Models (LLMs) for the task. Current work has focused on explicit span identification tasks like Named Entity Recognition (NER), while more subjective span identification with LLMs, in tasks like Aspect-based Sentiment Analysis (ABSA), remains underexplored. In this paper, we fill this gap by evaluating the performance of various LLMs on text span identification across three popular tasks: sentiment analysis, offensive language identification, and claim verification. We explore several LLM strategies, including instruction tuning, in-context learning, and chain-of-thought prompting. Our results indicate that underlying relationships within text aid LLMs in identifying precise text spans.
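To make the in-context learning strategy concrete, the sketch below shows one plausible shape for subjective span identification: a few-shot prompt asking the model to return the opinionated span, and a parser that maps the model's answer back to character offsets in the source text. This is a minimal illustration under assumptions of ours, not the paper's actual setup; the example sentences, prompt wording, and model response are all hypothetical, and the model call itself is omitted.

```python
# Hedged sketch of in-context subjective span identification.
# The few-shot examples and model output below are hypothetical;
# the paper's real prompts, models, and datasets are not reproduced here.

FEW_SHOT = [
    ("The battery life is terrible but the screen is gorgeous.",
     "battery life is terrible"),
    ("Service was slow, though the pasta tasted wonderful.",
     "Service was slow"),
]


def build_prompt(text: str) -> str:
    """Assemble a few-shot prompt asking for the subjective span."""
    lines = ["Extract the span expressing a subjective opinion."]
    for src, span in FEW_SHOT:
        lines.append(f"Text: {src}\nSpan: {span}")
    lines.append(f"Text: {text}\nSpan:")
    return "\n\n".join(lines)


def locate_span(text: str, model_output: str):
    """Map the model's answer back to character offsets, or None
    if the returned span does not occur verbatim in the text."""
    span = model_output.strip().strip('"')
    start = text.find(span)
    return (start, start + len(span)) if start != -1 else None


text = "The plot dragged on forever."
prompt = build_prompt(text)          # would be sent to an LLM
offsets = locate_span(text, "The plot dragged")  # simulated reply
```

Grounding the model's free-text answer via `locate_span` matters because generative models may paraphrase rather than copy; returning `None` on a mismatch gives the evaluation a clear failure signal, which is one reason span-level tasks are harder for LLMs than classification.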
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Subjective Span Identification
Aspect-based Sentiment Analysis
Offensive Language Identification
Claim Verification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models
Subjective Span Identification
In-Context Learning
Instruction Tuning
Chain of Thought