Perspective Transition of Large Language Models for Solving Subjective Tasks

📅 2025-01-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) exhibit response bias in subjective tasks—such as opinion judgment and value assessment—due to rigid, fixed perspective assumptions. Method: This paper proposes RPT (Reasoning through Perspective Transition), a dynamic perspective-switching method based on in-context learning. RPT is the first framework to systematically model perspective dependency in subjective reasoning. It introduces a meta-prompt scheduling mechanism that adaptively selects among first-person (direct), role-playing, and third-person perspectives during inference—departing from conventional static prompting paradigms—without requiring model fine-tuning. Contribution/Results: RPT is compatible with mainstream LLMs including GPT-4 and Llama-3. Evaluated on 12 diverse subjective tasks, it significantly outperforms chain-of-thought and expert-prompting baselines, achieving an average accuracy gain of 9.2%. The results demonstrate the generalizability of dynamic perspective adaptation for subjective reasoning and enhance LLMs' fine-grained modeling of stance, contextual nuance, and value orientation.

📝 Abstract
Large language models (LLMs) have revolutionized the field of natural language processing, enabling remarkable progress in various tasks. In contrast to objective tasks such as commonsense reasoning and arithmetic question-answering, the performance of LLMs on subjective tasks is still limited, where the perspective taken on the specific problem plays a crucial role in interpreting the context and giving a proper response. For example, in certain scenarios, LLMs may perform better when answering from an expert role perspective, potentially eliciting their relevant domain knowledge. In other scenarios, LLMs may provide more accurate responses when answering from a third-person standpoint, enabling a more comprehensive understanding of the problem and potentially mitigating inherent biases. In this paper, we propose Reasoning through Perspective Transition (RPT), a method based on in-context learning that enables LLMs to dynamically select among direct, role, and third-person perspectives to best solve the corresponding subjective problem. Through extensive experiments on 12 subjective tasks in total, using both closed-source and open-source LLMs including GPT-4, GPT-3.5, Llama-3, and Qwen-2, our method outperforms widely used methods based on a single fixed perspective, such as chain-of-thought prompting and expert prompting, highlighting the intricate ways that LLMs can adapt their perspectives to provide nuanced and contextually appropriate responses for different problems.
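The dynamic perspective selection described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the three prompt templates and the `scorer` callable (standing in for an LLM-based self-evaluation step) are hypothetical names introduced here for clarity.

```python
# Sketch of perspective-transition prompting: build candidate prompts from
# three perspectives (direct, role, third-person) and pick one before
# answering. All helper names here are illustrative assumptions.

PERSPECTIVES = {
    "direct": "Answer the following question directly:\n{q}",
    "role": "You are a domain expert on this topic. As that expert, answer:\n{q}",
    "third_person": (
        "Consider how an impartial third-party observer would view this "
        "situation, then answer:\n{q}"
    ),
}

def build_prompts(question: str) -> dict:
    """Instantiate each perspective template with the question."""
    return {name: tpl.format(q=question) for name, tpl in PERSPECTIVES.items()}

def select_perspective(question: str, scorer) -> str:
    """Pick the perspective whose prompt the scorer rates highest.

    `scorer` is a placeholder for the in-context self-evaluation step:
    in practice the LLM itself would judge which framing suits the task.
    """
    prompts = build_prompts(question)
    return max(prompts, key=lambda name: scorer(prompts[name]))

question = "Is it acceptable to break a promise to protect a friend?"
prompts = build_prompts(question)
# Toy scorer for demonstration only; a real scorer would query the model.
best = select_perspective(question, lambda p: len(p))
```

The key design point mirrored here is that perspective choice happens per question at inference time, with no fine-tuning: only the prompt framing changes.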
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Perspective Adaptation
Task-specific Viewpoints
Innovation

Methods, ideas, or system contributions that make the work stand out.

Perspective Transformation
Language Model
Subjective Judgment Tasks
Xiaolong Wang
Dept. of Comp. Sci. & Tech., Institute for AI, Tsinghua University, Beijing, China; Jiuquan Satellite Launch Center (JSLC), Gansu, China
Yuan Zhang
Dept. of Comp. Sci. & Tech., Institute for AI, Tsinghua University, Beijing, China
Ziyue Wang
Dept. of Comp. Sci. & Tech., Institute for AI, Tsinghua University, Beijing, China
Yuzhuang Xu
Tsinghua University
Natural Language Processing, Efficient AI, Machine Learning
Fuwen Luo
Tsinghua University
Computer Science
Yile Wang
Shenzhen University
Natural Language Processing
Peng Li
Institute for AI Industry Research (AIR), Tsinghua University, Beijing, China
Yang Liu
Dept. of Comp. Sci. & Tech., Institute for AI, Tsinghua University, Beijing, China; Institute for AI Industry Research (AIR), Tsinghua University, Beijing, China