LVLMs are Bad at Overhearing Human Referential Communication

📅 2025-09-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current large vision-language models (LVLMs) struggle to comprehend dynamically generated and reused referring expressions in spontaneous, multi-turn collaborative dialogues, and fail to exhibit sustained performance improvement during passive listening. This work presents the first systematic evaluation of LVLMs as “dialogue listeners” — assessing their ability to accumulate and reuse referential knowledge across repeated visual grounding tasks. Leveraging a real-world spontaneous dialogue corpus, we empirically analyze seven state-of-the-art LVLMs on multi-turn visual reference resolution. Results reveal that existing LVLMs cannot effectively model the evolution of referring expressions in dynamic conversational contexts, exposing fundamental limitations in cross-turn semantic accumulation and reuse. Our study identifies a critical gap in LVLMs’ capacity for embodied interaction and collaborative understanding. To foster progress, we publicly release the dialogue corpus, evaluation code, and standardized benchmarking protocol — establishing a foundation for developing next-generation vision-language models with contextual memory capabilities.

📝 Abstract
During spontaneous conversations, speakers collaborate on novel referring expressions, which they can then re-use in subsequent conversations. Understanding such referring expressions is an important ability for an embodied agent, so that it can carry out tasks in the real world. This requires integrating and understanding language, vision, and conversational interaction. We study the capabilities of seven state-of-the-art Large Vision Language Models (LVLMs) as overhearers to a corpus of spontaneous conversations between pairs of human discourse participants engaged in a collaborative object-matching task. We find that such a task remains challenging for current LVLMs and they all fail to show a consistent performance improvement as they overhear more conversations from the same discourse participants repeating the same task for multiple rounds. We release our corpus and code for reproducibility and to facilitate future research.
Problem

Research questions and friction points this paper is trying to address.

LVLMs struggle to understand human referential communication
Models fail to improve with repeated exposure to the same speakers
Integrating vision, language, and conversational interaction remains challenging
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluating LVLMs as overhearers of human referential communication
Testing models on a corpus from a collaborative object-matching task
Releasing the corpus and code for reproducibility and future research
Zhengxiang Wang
Department of Linguistics, Institute for Advanced Computational Science
Weiling Li
Department of Psychology
Panagiotis Kaliosis
Department of Computer Science, Stony Brook University
Owen Rambow
Stony Brook University
Natural Language Processing · Computational Linguistics · Computational Social Science
Susan E. Brennan
Department of Psychology