Metaphor and Large Language Models: When Surface Features Matter More than Deep Understanding

📅 2025-07-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper systematically evaluates whether the apparent metaphor comprehension of large language models (LLMs) reflects genuine semantic understanding or reliance on superficial cues such as lexical overlap, sentence length, and syntactic patterns in natural language inference (NLI) and question answering (QA) tasks. Method: Using multiple publicly available metaphor datasets, the authors run extensive experiments across diverse prompting configurations and quantify how strongly model performance correlates with surface-level linguistic features. Contribution/Results: LLMs' metaphor interpretation turns out to be driven largely by surface features, in-context learning, and linguistic knowledge acquired during pretraining rather than by robust semantic reasoning, so standard evaluation protocols yield overly optimistic estimates of metaphor understanding. The paper argues for more realistic, bias-aware evaluation frameworks for metaphor interpretation and releases its data and code to support reproducible research.
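To make the methodology concrete, here is a minimal sketch (not the authors' released code) of this kind of surface-feature analysis: compute lexical overlap and sentence length for each NLI pair, then correlate each feature with per-example model correctness. The `examples` list, the helper names, and the use of SciPy's point-biserial correlation are illustrative assumptions.

```python
# Toy surface-feature analysis: correlate lexical overlap and sentence
# length with per-example model correctness on NLI pairs.
from scipy.stats import pointbiserialr

def lexical_overlap(premise: str, hypothesis: str) -> float:
    """Fraction of hypothesis tokens that also appear in the premise."""
    premise_tokens = set(premise.lower().split())
    hypothesis_tokens = hypothesis.lower().split()
    if not hypothesis_tokens:
        return 0.0
    return sum(t in premise_tokens for t in hypothesis_tokens) / len(hypothesis_tokens)

# Hypothetical examples: NLI pairs plus a 0/1 flag for whether the model
# answered correctly. A real analysis would use hundreds of examples.
examples = [
    {"premise": "He drowned in a sea of paperwork.",
     "hypothesis": "He had far too much paperwork.", "correct": 1},
    {"premise": "The committee shot down her proposal.",
     "hypothesis": "The committee rejected her proposal.", "correct": 1},
    {"premise": "Time is a thief.",
     "hypothesis": "Time steals physical objects.", "correct": 0},
]

overlap = [lexical_overlap(ex["premise"], ex["hypothesis"]) for ex in examples]
length = [len(ex["premise"].split()) for ex in examples]
correct = [ex["correct"] for ex in examples]

# Point-biserial correlation between a binary outcome and a continuous
# feature; a large |r| suggests the model is keying on the surface feature.
for name, feature in [("lexical overlap", overlap), ("premise length", length)]:
    r, p = pointbiserialr(correct, feature)
    print(f"{name}: r={r:.2f}, p={p:.3f}")
```

A strong correlation between a surface feature and correctness, measured on real data at scale, is evidence that the model keys on that feature rather than on the metaphor itself.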

📝 Abstract
This paper presents a comprehensive evaluation of the capabilities of Large Language Models (LLMs) in metaphor interpretation across multiple datasets, tasks, and prompt configurations. Although metaphor processing has gained significant attention in Natural Language Processing (NLP), previous research has been limited to single-dataset evaluations and specific task settings, often using artificially constructed data through lexical replacement. We address these limitations by conducting extensive experiments using diverse publicly available datasets with inference and metaphor annotations, focusing on Natural Language Inference (NLI) and Question Answering (QA) tasks. The results indicate that LLMs' performance is more influenced by features like lexical overlap and sentence length than by metaphorical content, demonstrating that any alleged emergent abilities of LLMs to understand metaphorical language are the result of a combination of surface-level features, in-context learning, and linguistic knowledge. This work provides critical insights into the current capabilities and limitations of LLMs in processing figurative language, highlighting the need for more realistic evaluation frameworks in metaphor interpretation tasks. Data and code are publicly available.
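The abstract's point about artificially constructed data is easy to see with a toy pair; the sentences below are hypothetical, not drawn from any cited dataset. Lexical replacement swaps a single word, so the metaphorical sentence and its literal counterpart share almost all of their tokens:

```python
# Hypothetical sentence pair built by lexical replacement: the
# metaphorical and literal versions differ in exactly one word.
metaphorical = "The lawyer demolished the witness's testimony."
literal = "The lawyer refuted the witness's testimony."

meta_tokens = set(metaphorical.lower().split())
lit_tokens = set(literal.lower().split())
shared = meta_tokens & lit_tokens

print(f"shared tokens: {len(shared)} of {len(meta_tokens)}")  # 4 of 5
# Near-total overlap means a model can often match such pairs
# without interpreting the metaphor at all.
```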
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' metaphor interpretation across diverse datasets and tasks
Disentangling the influence of surface features from genuine metaphor understanding
Identifying limitations in current evaluation frameworks for metaphor processing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluates LLMs across multiple public metaphor datasets and prompt configurations
Frames metaphor interpretation as NLI and QA tasks (see the prompting sketch below)
Identifies surface features such as lexical overlap and sentence length as key performance factors
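Below is a minimal sketch of what a zero-shot NLI prompt for metaphor interpretation might look like. The client library, model name, and prompt wording are illustrative assumptions, not the paper's actual experimental setup.

```python
# Illustrative zero-shot NLI prompt for a metaphorical premise.
# Assumes the `openai` Python client (>= 1.0) and an OPENAI_API_KEY
# in the environment; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "Premise: {premise}\n"
    "Hypothesis: {hypothesis}\n"
    "Does the premise entail the hypothesis? Answer 'yes' or 'no'."
)

def nli_judgment(premise: str, hypothesis: str) -> str:
    """Ask the model for a binary entailment judgment."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": PROMPT.format(premise=premise, hypothesis=hypothesis),
        }],
    )
    return response.choices[0].message.content.strip().lower()

print(nli_judgment("Time is a thief.", "Time steals physical objects."))
```

Comparing accuracy under several such prompt variants, and across metaphorical versus literal pairs, is the kind of configuration sweep the paper reports.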