Large Language Models for Subjective Language Understanding: A Survey

📅 2025-08-11
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
Subjective language understanding, which encompasses sentiment analysis, emotion recognition, sarcasm detection, humor comprehension, stance detection, metaphor interpretation, intent detection, and aesthetic assessment, faces inherent challenges including ambiguity, strong contextual dependency, and severe annotation scarcity. Method: We propose the first taxonomy of subjective language understanding tailored to large language models (LLMs), grounded in cognitive linguistics, and we develop a unified modeling framework that integrates architectural analysis, cross-task comparative evaluation, and transfer learning experiments. Contribution/Results: The taxonomy reveals shared cognitive mechanisms and modeling commonalities across tasks, and empirical validation confirms the framework's effectiveness in improving generalization and interpretability. We systematically survey benchmark datasets and state-of-the-art methods, identify critical open issues (including model biases, data skewness, and ethical risks), and provide theoretical foundations and practical guidelines for developing explainable, robust, and human-centered subjective language processing systems.

📝 Abstract
Subjective language understanding refers to a broad set of natural language processing tasks where the goal is to interpret or generate content that conveys personal feelings, opinions, or figurative meanings rather than objective facts. With the advent of large language models (LLMs) such as ChatGPT, LLaMA, and others, there has been a paradigm shift in how we approach these inherently nuanced tasks. In this survey, we provide a comprehensive review of recent advances in applying LLMs to subjective language tasks, including sentiment analysis, emotion recognition, sarcasm detection, humor understanding, stance detection, metaphor interpretation, intent detection, and aesthetic assessment. We begin by clarifying the definition of subjective language from linguistic and cognitive perspectives, and we outline the unique challenges it poses (e.g., ambiguity, figurativeness, context dependence). We then survey the evolution of LLM architectures and techniques that particularly benefit subjectivity tasks, highlighting why LLMs are well suited to model subtle human-like judgments. For each of the eight tasks, we summarize task definitions, key datasets, state-of-the-art LLM-based methods, and remaining challenges. We provide comparative insights, discussing commonalities and differences among tasks and how multi-task LLM approaches might yield unified models of subjectivity. Finally, we identify open issues such as data limitations, model bias, and ethical considerations, and suggest future research directions. We hope this survey will serve as a valuable resource for researchers and practitioners interested in the intersection of affective computing, figurative language processing, and large-scale language models.
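To make the prompting paradigm the abstract describes concrete, here is a minimal sketch of zero-shot LLM prompting for one of the eight surveyed tasks (sarcasm detection). The `query_llm` function is a hypothetical stand-in for any chat-capable model such as ChatGPT or LLaMA, and the prompt wording is illustrative rather than a method taken from the survey.

```python
# Minimal sketch: zero-shot prompting for sarcasm detection, one of the
# eight subjective tasks covered by the survey. `query_llm` is a
# hypothetical stand-in for any chat-capable LLM (ChatGPT, LLaMA, ...);
# it returns a canned reply here so the sketch runs offline.

def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real API or local model."""
    return "sarcastic - the positive wording clashes with a negative context."

def detect_sarcasm(utterance: str, context: str = "") -> str:
    # Subjective labels are strongly context-dependent, so the prompt
    # carries the surrounding context alongside the utterance itself.
    prompt = (
        "You are an annotator for figurative language.\n"
        f"Context: {context or '(none)'}\n"
        f"Utterance: {utterance}\n"
        "Is the utterance sarcastic? Answer 'sarcastic' or 'literal', "
        "then give a one-sentence justification."
    )
    return query_llm(prompt)

# The same words can flip label once context is added, which is exactly
# the ambiguity and context dependence the survey highlights.
print(detect_sarcasm("Great, just great.", context="My flight was cancelled."))
```

Asking for a justification alongside the label is one simple way to surface the model's reasoning, in line with the survey's emphasis on interpretability.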
Problem

Research questions and friction points this paper is trying to address.

Surveying LLMs' role in subjective language tasks like sentiment analysis.
Addressing challenges in ambiguity and context dependence of subjective language.
Exploring ethical issues and future directions in LLM-based subjectivity modeling.
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs for nuanced subjective language tasks
Multi-task LLM approaches for unified models (see the sketch after this list)
Addressing ambiguity with advanced LLM architectures
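To illustrate the multi-task idea flagged above, here is a minimal sketch assuming a single instruction-following LLM can be steered across subjective tasks by one shared prompt template. The task instructions and label sets below are illustrative choices, not the survey's benchmark settings.

```python
# Hedged sketch of the multi-task idea: one instruction template serves
# several subjective tasks, so a single LLM can act as a unified
# subjectivity model. Task wordings and label sets are illustrative.

TASKS = {
    "sentiment": ("Classify the overall sentiment.", ["positive", "negative", "neutral"]),
    "emotion":   ("Identify the dominant emotion.", ["joy", "anger", "sadness", "fear", "surprise"]),
    "sarcasm":   ("Decide whether the text is sarcastic.", ["sarcastic", "literal"]),
    "stance":    ("Determine the stance toward the stated target.", ["favor", "against", "none"]),
}

def build_prompt(task: str, text: str, target: str = "") -> str:
    instruction, labels = TASKS[task]
    target_line = f"Target: {target}\n" if target else ""
    return (
        f"Task: {instruction}\n"
        f"{target_line}"
        f"Text: {text}\n"
        f"Answer with exactly one label from {labels}."
    )

# The same model and template handle every task; only the instruction
# and the label inventory change between tasks.
for task, text in [("sentiment", "The plot dragged, but the acting saved it."),
                   ("sarcasm", "Oh wonderful, another Monday.")]:
    print(build_prompt(task, text), end="\n\n")
```

In practice each prompt would be fed to the same `query_llm`-style call for every task; whether one model genuinely unifies these judgments is precisely the open question the survey raises.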
👥 Authors
Changhao Song
Tianjin University
natural language processing · recommendation system · Large Language Model
Yazhou Zhang
Associate Professor, Tianjin University
Sentiment Analysis · Quantum Cognition · Sarcasm Detection · Humor Analysis
Hui Gao
College of Intelligence and Computing, Tianjin University
Ben Yao
School of Nursing, The Hong Kong Polytechnic University
Peng Zhang
College of Intelligence and Computing, Tianjin University