Pragmatic Theories Enhance Understanding of Implied Meanings in LLMs

📅 2025-10-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) exhibit significant limitations in understanding implicit meaning. To address this, we propose a pragmatics-enhanced prompting method that integrates classical pragmatic frameworks—such as Grice’s Cooperative Principle and Relevance Theory—into in-context learning and chain-of-thought reasoning via lightweight prompts (e.g., theory names or concise definitions), thereby guiding multi-step semantic inference. Unlike computationally intensive fine-tuning or structured knowledge injection, our approach achieves interpretable, theory-guided reasoning solely through prompt engineering. Experiments across multiple implicit meaning understanding benchmarks demonstrate that our method improves LLM accuracy by up to 9.6%. Notably, even minimal prompts—using only theory names—yield consistent performance gains of 1–3%, underscoring the efficacy and practicality of pragmatic theories as cognitive scaffolds for implicit reasoning.

📝 Abstract
The ability to accurately interpret implied meanings plays a crucial role in human communication and language use, and language models are also expected to possess this capability. This study demonstrates that providing language models with pragmatic theories as prompts is an effective in-context learning approach for tasks that require understanding implied meanings. Specifically, we propose an approach in which an overview of a pragmatic theory, such as Gricean pragmatics or Relevance Theory, is presented as a prompt to the language model, guiding it through a step-by-step reasoning process to derive a final interpretation. Experimental results show that, compared to the baseline, which prompts intermediate reasoning without presenting pragmatic theories (0-shot Chain-of-Thought), our methods enable language models to achieve up to 9.6% higher scores on pragmatic reasoning tasks. Furthermore, we show that even without explaining the details of pragmatic theories, merely mentioning their names in the prompt yields a modest performance improvement (around 1–3%) in larger models compared to the baseline.
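The prompting scheme described above can be sketched as a simple prompt-construction step: prepend either a short overview of a pragmatic theory or just its name to a zero-shot Chain-of-Thought instruction. The theory summaries and prompt wording below are illustrative assumptions, not the paper's exact prompts.

```python
# Minimal sketch of pragmatics-enhanced prompting.
# The overviews and wording are placeholders, not the paper's actual prompts.

THEORY_OVERVIEWS = {
    "Gricean pragmatics": (
        "Grice's Cooperative Principle assumes speakers are truthful "
        "(Quality), adequately informative (Quantity), relevant (Relation), "
        "and clear (Manner); apparent violations of a maxim signal an "
        "implicature the hearer should recover."
    ),
    "Relevance Theory": (
        "Relevance Theory holds that hearers interpret an utterance by "
        "selecting the reading that yields the greatest cognitive effect "
        "for the least processing effort."
    ),
}

def build_prompt(utterance: str, theory: str, name_only: bool = False) -> str:
    """Compose a zero-shot CoT prompt enriched with a pragmatic-theory
    overview, or (in the minimal variant) only the theory's name."""
    if name_only:
        # Minimal variant: mention the theory name without explaining it.
        preamble = f"Use {theory} to interpret the following utterance."
    else:
        preamble = THEORY_OVERVIEWS[theory]
    return (
        f"{preamble}\n"
        f"Utterance: {utterance}\n"
        "Let's think step by step about what the speaker implies, "
        "then state the final interpretation."
    )
```

A caller would send the returned string to the model of their choice; the paper's comparison is between this theory-enriched prompt and the same CoT instruction without any pragmatic preamble.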
Problem

Research questions and friction points this paper is trying to address.

Enhancing LLMs' interpretation of implied meanings
Using pragmatic theories as prompts for reasoning
Improving performance on pragmatic reasoning tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using pragmatic theories as prompts for LLMs
Step-by-step reasoning guided by Gricean pragmatics
Mentioning theory names improves model performance
Takuma Sato
Nara Institute of Science and Technology, Nara, Japan
Seiya Kawano
Kyoto Institute of Technology
Koichiro Yoshino
Tokyo Institute of Technology / GRP, RIKEN
spoken dialogue systems · natural language processing · spoken language processing · human robot