Understand the Implication: Learning to Think for Pragmatic Understanding

📅 2025-06-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) struggle with pragmatic inference—reasoning about implicit meaning beyond literal semantics—and existing training methods depend on annotated labels alone. Method: We introduce ImpliedMeaningPreference, the first pragmatic preference dataset featuring explicit correct and incorrect reasoning chains, and propose a Chain-of-Thought (CoT)-integrated pragmatic preference learning paradigm that incorporates explicit reasoning guidance into preference tuning. Contribution/Results: Our method significantly improves zero-shot generalization to unseen pragmatic tasks (e.g., presupposition, deixis). Experiments across multiple LLM families show an 11.12% absolute gain in pragmatic understanding accuracy and a 16.10% improvement in cross-task transfer performance. These results demonstrate the effectiveness and scalability of reasoning-driven training for modeling pragmatic competence.

📝 Abstract
Pragmatics, the ability to infer meaning beyond literal interpretation, is crucial for social cognition and communication. While LLMs have been benchmarked for their pragmatic understanding, improving their performance remains underexplored. Existing methods rely on annotated labels but overlook the reasoning process humans naturally use to interpret implicit meaning. To bridge this gap, we introduce a novel pragmatic dataset, ImpliedMeaningPreference, that includes explicit reasoning (thoughts) for both correct and incorrect interpretations. Through preference tuning and supervised fine-tuning, we demonstrate that thought-based learning significantly enhances LLMs' pragmatic understanding, improving accuracy by 11.12% across model families. We further present a transfer-learning study evaluating thought-based training on other pragmatics tasks (presupposition, deixis) not seen during training, and observe an improvement of 16.10% over label-trained models.
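The preference-tuning setup described in the abstract—pairs in which the preferred response carries a correct reasoning chain and the dispreferred one an incorrect chain—can be sketched with a standard DPO-style objective. This is a minimal illustration of that training signal, not the paper's implementation; the log-probability values and the `beta` setting below are hypothetical.

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Direct Preference Optimization loss for one preference pair.

    Each log-probability is the policy's (or a frozen reference model's)
    total log-likelihood of a full response, i.e. an explicit reasoning
    chain followed by the interpretation, as in thought-based tuning.
    """
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # -log sigmoid(margin): small when the policy prefers the
    # correct-reasoning response more strongly than the reference does.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Toy numbers: the policy already favors the correct-thought response,
# so the loss falls below the indifference value of ln(2) ~ 0.693.
loss = dpo_loss(logp_chosen=-12.0, logp_rejected=-15.0,
                ref_logp_chosen=-14.0, ref_logp_rejected=-14.5)
```

In label-only training, the chosen/rejected responses would contain just the final interpretation; the paper's contribution is that including the reasoning chain in the scored response improves both in-task accuracy and transfer to unseen pragmatic phenomena.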
Problem

Research questions and friction points this paper is trying to address.

Improving LLMs' pragmatic understanding beyond literal interpretation
Bridging the gap in reasoning processes for implicit meaning
Enhancing transfer learning for unseen pragmatic tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces ImpliedMeaningPreference dataset with reasoning
Uses thought-based learning for pragmatic understanding
Shows transfer-learning improvement for unseen tasks
Settaluri Lakshmi Sravanthi
Indian Institute of Technology Bombay, Mumbai, India
Kishan Maharaj
Indian Institute of Technology Bombay, Mumbai, India
Sravani Gunnu
Indian Institute of Technology Bombay, Mumbai, India
Abhijit Mishra
Assistant Professor of Practice, iSchool, University of Texas at Austin
Machine Learning, Natural Language Processing, Cognitive Science, Eye-Tracking
Pushpak Bhattacharyya
Indian Institute of Technology Bombay, Mumbai, India