Are Multimodal Large Language Models Pragmatically Competent Listeners in Simple Reference Resolution Tasks?

📅 2025-06-13
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This study investigates whether multimodal large language models (MLLMs) possess foundational pragmatic competence, specifically context-dependent color reference resolution, when interpreting abstract visual stimuli such as color patches and color grids. The authors introduce a minimalist, pragmatically sensitive evaluation paradigm featuring a standardized referential resolution protocol, controlled stimulus generation, multi-round human annotation validation, cross-model consistency analysis, and fine-grained error attribution. Experiments show that state-of-the-art MLLMs achieve below 60% accuracy on the task, despite its triviality for humans, revealing critical deficits in inferring implicit speaker intent and modeling joint attention. The work systematically exposes this fundamental pragmatic gap in MLLMs and establishes a reproducible benchmark and diagnostic framework for semantic-pragmatic alignment research.

📝 Abstract
We investigate the linguistic abilities of multimodal large language models (MLLMs) in reference resolution tasks featuring simple yet abstract visual stimuli, such as color patches and color grids. Although the task is straightforward for human dyads and may not seem challenging for today's language models, we consider it a highly relevant probe of the pragmatic capabilities of MLLMs. Our results and analyses indeed suggest that basic pragmatic capabilities, such as context-dependent interpretation of color descriptions, remain a major challenge for state-of-the-art MLLMs.
Problem

Research questions and friction points this paper is trying to address.

Assessing MLLMs' performance on reference resolution tasks
Evaluating pragmatic competence with abstract visual stimuli
Characterizing challenges in context-dependent color interpretation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Minimalist evaluation of MLLMs on abstract visual stimuli
Probing of pragmatic skills via simple color reference tasks
Analysis of context-dependent color interpretation abilities
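The paper's dataset and protocol are not reproduced on this page, but the trial structure the bullets above describe can be sketched as a speaker–listener loop over generated color grids. Everything below is an illustrative assumption, not the authors' implementation: the function names, the heuristic "speaker" that describes a target relative to its context, and the literal "listener" whose channel-maximizing strategy can diverge from the speaker's context-dependent intent.

```python
import random

def make_color_grid(n_patches=3, seed=0):
    """Generate random RGB patches (a stand-in for the paper's
    controlled stimulus generation, which is not public here)."""
    rng = random.Random(seed)
    return [tuple(rng.randint(0, 255) for _ in range(3)) for _ in range(n_patches)]

def describe(target, context):
    """Toy 'speaker': name the channel where the target most stands out
    relative to the OTHER patches in the grid (context-dependent)."""
    names = ("red", "green", "blue")
    others = [p for p in context if p != target]
    mean = [sum(p[c] for p in others) / len(others) for c in range(3)]
    # Pick the channel with the largest positive deviation from the context mean.
    best = max(range(3), key=lambda c: target[c] - mean[c])
    return f"the most {names[best]} one"

def resolve(description, context):
    """Toy 'listener': literally choose the patch that maximizes the
    named channel, ignoring the context the speaker reasoned over."""
    names = ("red", "green", "blue")
    channel = next(c for c, n in enumerate(names) if n in description)
    return max(range(len(context)), key=lambda i: context[i][channel])

def accuracy(trials):
    """Score the listener's choices against the speaker's intended targets."""
    correct = 0
    for seed in range(trials):
        grid = make_color_grid(seed=seed)
        target_idx = seed % len(grid)
        desc = describe(grid[target_idx], grid)
        correct += resolve(desc, grid) == target_idx
    return correct / trials
```

Because the speaker's choice is relative to the surrounding patches while the listener's is absolute, the two can disagree, which is exactly the kind of literal-versus-pragmatic interpretation gap the benchmark is designed to expose.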