Unveiling LLMs' Metaphorical Understanding: Exploring Conceptual Irrelevance, Context Leveraging and Syntactic Influence

📅 2025-10-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work systematically investigates the cognitive mechanisms and limitations of large language models (LLMs) in metaphor comprehension. Addressing three core problems—conceptual mapping misalignment, ambiguous representation of metaphorical versus literal knowledge, and syntactic sensitivity bias—we propose the first multidimensional evaluation framework integrating embedding-space projection, contrastive analysis of metaphor-literal word pairs, and syntactic structure perturbation. Experiments reveal that LLMs generate 15–25% conceptually irrelevant interpretations, over-relying on superficial statistical cues from training data (e.g., high-frequency metaphor collocations or syntactic markers) rather than context-driven semantic integration; moreover, their syntactic sensitivity markedly exceeds their structural understanding capacity. The study identifies, for the first time, three systematic cognitive biases in LLMs’ metaphor processing, empirically demonstrating their lack of genuine metaphorical reasoning ability. These findings provide both theoretical grounding and methodological support for future interpretable modeling and cognitive alignment efforts.
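The embedding-space projection idea above can be illustrated with a minimal sketch: project a context embedding onto candidate sense vectors and pick the nearest one by cosine similarity. The toy vectors, sense names, and helper functions below are hypothetical stand-ins, not the paper's actual setup; a real study would use LLM hidden states.

```python
import math

# Toy word vectors standing in for LLM embeddings (hypothetical values,
# not from the paper; a real study would use model hidden states).
EMBEDDINGS = {
    "fall_metaphorical": [0.9, 0.1, 0.3],   # "fall in love" sense
    "fall_literal":      [0.1, 0.9, 0.2],   # "drop down" sense
    "love":              [0.8, 0.2, 0.4],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nearest_sense(context_vec, senses):
    """Project a context embedding onto candidate sense vectors and
    return the sense with the highest cosine similarity."""
    return max(senses, key=lambda s: cosine(context_vec, EMBEDDINGS[s]))

# A romance-related context should land on the metaphorical sense;
# mapping it to the literal sense would be a "conceptually irrelevant"
# interpretation of the kind the paper quantifies at 15-25%.
print(nearest_sense(EMBEDDINGS["love"],
                    ["fall_metaphorical", "fall_literal"]))
# → fall_metaphorical
```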

📝 Abstract
Metaphor is a complex linguistic phenomenon shaped by context and external factors. While Large Language Models (LLMs) demonstrate advanced capabilities in knowledge integration, contextual reasoning, and creative generation, their mechanisms for metaphor comprehension remain insufficiently explored. This study examines LLMs' metaphor-processing abilities from three perspectives: (1) Concept Mapping: using embedding space projections to evaluate how LLMs map concepts in target domains (e.g., misinterpreting "fall in love" as "drop down from love"); (2) Metaphor-Literal Repository: analyzing metaphorical words and their literal counterparts to identify inherent metaphorical knowledge; and (3) Syntactic Sensitivity: assessing how metaphorical syntactic structures influence LLMs' performance. Our findings reveal that LLMs generate 15%-25% conceptually irrelevant interpretations, depend on metaphorical indicators in training data rather than contextual cues, and are more sensitive to syntactic irregularities than to structural comprehension. These insights underline the limitations of LLMs in metaphor analysis and call for more robust computational approaches.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' ability to map metaphorical concepts accurately
Analyzing metaphorical knowledge in LLMs' literal word repositories
Assessing syntactic influence on LLMs' metaphor comprehension performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using embedding space projections to map concepts
Analyzing metaphorical and literal word repositories
Assessing syntactic structure influence on performance
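The syntactic-sensitivity probe in the last bullet can be sketched as a perturbation test: generate word-order variants of a metaphorical sentence and measure how often a judgement flips. The sentence, the swap-based perturbation scheme, and the stand-in `judge` function are all illustrative assumptions, not the paper's protocol; the toy judge relies on a surface collocation, mimicking the reliance on superficial indicators the paper reports.

```python
def perturb(sentence: str):
    """Yield simple syntactic perturbations: adjacent-word swaps."""
    words = sentence.split()
    for i in range(len(words) - 1):
        swapped = words[:]
        swapped[i], swapped[i + 1] = swapped[i + 1], swapped[i]
        yield " ".join(swapped)

def sensitivity(judge, sentence: str) -> float:
    """Fraction of perturbed variants on which the judgement flips
    relative to the original sentence; a high score on meaning-preserving
    perturbations suggests sensitivity to form over structure."""
    base = judge(sentence)
    variants = list(perturb(sentence))
    flips = sum(judge(v) != base for v in variants)
    return flips / len(variants)

# Stand-in judge: flags a sentence as metaphorical only when the
# collocation "in love" appears verbatim - a surface cue, not
# structural understanding.
judge = lambda s: "in love" in s
print(sensitivity(judge, "she fell in love last spring"))
# → 0.6
```

Three of the five adjacent swaps break the "in love" collocation, so the surface-cue judge flips on 60% of the variants even though the metaphorical meaning is largely preserved.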
Fengying Ye
NLP2CT Lab, Department of Computer and Information Science, University of Macau
Shanshan Wang
NLP2CT Lab, Department of Computer and Information Science, University of Macau
Lidia S. Chao
University of Macau
Derek F. Wong
Professor, Department of Computer and Information Science, University of Macau
Machine Translation · Neural Machine Translation · Natural Language Processing · Machine Learning