Scale Can't Overcome Pragmatics: The Impact of Reporting Bias on Vision-Language Reasoning

📅 2026-02-26
📈 Citations: 1
Influential: 0
🤖 AI Summary
This study addresses the limited reasoning capabilities of current vision-language models (VLMs), which stem from reporting bias in training data that systematically omits implicit information—such as spatial relations, temporal dynamics, negation, and counting. For the first time, the authors formally integrate pragmatic theories of reporting bias into vision-language learning, constructing a targeted evaluation benchmark to assess multiple models, including OpenCLIP, LLaVA-1.5, and Molmo. Their findings reveal that merely scaling up data volume or incorporating multilingual corpora fails to rectify these reasoning gaps. In contrast, explicitly designing and integrating annotations that surface such implicit information substantially enhances model performance. This work challenges the prevailing “scale-is-all-you-need” paradigm, underscoring the necessity of deliberately curating training data that supports robust multimodal reasoning.

📝 Abstract
The lack of reasoning capabilities in Vision-Language Models (VLMs) has remained at the forefront of research discourse. We posit that this behavior stems from a reporting bias in their training data: how people communicate about visual content by default omits tacit information needed to supervise some types of reasoning; e.g., "at the game today!" is a more likely caption than "a photo of 37 people standing behind a field". We investigate the data underlying the popular VLMs OpenCLIP, LLaVA-1.5, and Molmo through the lens of theories from pragmatics, and find that reporting bias results in insufficient representation of four reasoning skills (spatial, temporal, negation, and counting), despite the corpora being web-scale and/or synthetically generated. With a set of curated benchmarks, we demonstrate that: (i) VLMs perform poorly on the aforementioned types of reasoning suppressed in the training data by reporting bias; (ii) contrary to popular belief, scaling data size, model size, or the number of languages does not result in the emergence of these skills by default; but, promisingly, (iii) incorporating annotations specifically collected to capture tacit information is effective. Our findings highlight the need for more intentional training-data curation, rather than counting on scale for the emergence of reasoning capabilities.
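The kind of corpus audit the abstract describes can be sketched as a simple keyword scan over captions: how often does each of the four reasoning skills surface at all? The word lists and captions below are hypothetical illustrations, not the authors' actual patterns or benchmark data.

```python
import re
from collections import Counter

# Hypothetical surface-cue patterns for the four reasoning skills the
# paper examines (spatial, temporal, negation, counting). A real audit
# would use far richer linguistic cues than these keyword lists.
SKILL_PATTERNS = {
    "spatial": r"\b(behind|above|below|beside|between|left|right)\b",
    "temporal": r"\b(before|after|while|during|then)\b",
    "negation": r"\b(no|not|without|none|never)\b",
    "counting": r"\b(one|two|three|four|five|\d+)\b",
}

def skill_coverage(captions):
    """Count how many captions surface each skill at least once."""
    counts = Counter()
    for cap in captions:
        text = cap.lower()
        for skill, pattern in SKILL_PATTERNS.items():
            if re.search(pattern, text):
                counts[skill] += 1
    return counts

# Toy captions echoing the abstract's example: the "default" caption
# carries none of the tacit information the verbose one makes explicit.
captions = [
    "at the game today!",
    "a photo of 37 people standing behind a field",
    "sunset over the lake",
    "my dog, no leash, running free",
]

print(skill_coverage(captions))
# The casual captions trigger almost no skill cues; only the
# deliberately explicit one surfaces spatial and counting terms.
```

Run over a web-scale caption corpus, a scan like this would show the skewed distribution the paper attributes to reporting bias: speakers omit what is pragmatically obvious, so spatial, temporal, negation, and counting cues are systematically rare.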
Problem

Research questions and friction points this paper is trying to address.

vision-language models
reasoning
reporting bias
pragmatics
training data
Innovation

Methods, ideas, or system contributions that make the work stand out.

reporting bias
vision-language reasoning
pragmatics
data curation
reasoning emergence