🤖 AI Summary
This study investigates whether multimodal large language models (MLLMs) possess foundational pragmatic competence, specifically context-dependent color reference resolution, when interpreting abstract visual stimuli (e.g., color patches, color grids). We introduce the first minimalist, pragmatically sensitive evaluation paradigm for this ability, featuring a standardized referential resolution protocol, controlled stimulus generation, multi-round human annotation validation, cross-model consistency analysis, and fine-grained error attribution. Experiments reveal that state-of-the-art MLLMs achieve below 60% accuracy on this task, despite its triviality for humans, highlighting critical deficits in inferring implicit speaker intent and modeling joint attention. Our work is the first to systematically expose this fundamental pragmatic gap in MLLMs, establishing a reproducible benchmark and diagnostic framework for semantic-pragmatic alignment research.
📝 Abstract
We investigate the linguistic abilities of multimodal large language models in reference resolution tasks featuring simple yet abstract visual stimuli, such as color patches and color grids. Although the task is straightforward for human dyads and may not seem challenging for today's language models, we consider it a highly relevant probe of the pragmatic capabilities of MLLMs. Our results and analyses indeed suggest that basic pragmatic capabilities, such as the context-dependent interpretation of color descriptions, still pose major challenges for state-of-the-art MLLMs.