[De|Re]constructing VLMs' Reasoning in Counting

📅 2025-10-22
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
Vision-language models (VLMs) exhibit systematic reasoning failures in counting tasks: their answers are highly sensitive to the number and type of objects, their spatial arrangement, and the co-occurrence of distractors. Through controlled experiments and layer-wise representational analysis, the authors trace the primary cause to an incorrect mapping of the last-layer representation into the output space. They propose a lightweight remedy that fine-tunes only the output projection layer, bypassing full-parameter adaptation, which improves counting accuracy by up to 21% across seven state-of-the-art VLMs and yields consistent gains on real-world counting datasets. The work localizes a representational root cause of VLM counting failures and offers a practical, parameter-efficient path for improving quantitative and compositional reasoning in vision-language understanding.

📝 Abstract
Vision-Language Models (VLMs) have recently gained attention due to their competitive performance on multiple downstream tasks, achieved by following user-input instructions. However, VLMs still exhibit several limitations in visual reasoning, such as difficulties in identifying relations (e.g., spatial, temporal, and among objects), understanding temporal sequences (e.g., frames), and counting objects. In this work, we go beyond score-level benchmark evaluations of VLMs by investigating the underlying causes of their failures and proposing a targeted approach to improve their reasoning capabilities. We study the reasoning skills of seven state-of-the-art VLMs in the counting task under controlled experimental conditions. Our experiments show that VLMs are highly sensitive to the number and type of objects, their spatial arrangement, and the co-occurrence of distractors. A layer-wise analysis reveals that errors are due to incorrect mapping of the last-layer representation into the output space. Our targeted training shows that fine-tuning just the output layer improves accuracy by up to 21%. We corroborate these findings by achieving consistent improvements on real-world datasets.
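The layer-wise analysis described in the abstract can be illustrated with a "logit lens"-style probe: decode each layer's hidden state through the model's own output head and record at which depth the correct count token becomes top-ranked. The sketch below is a minimal illustration, not the authors' code; it assumes a HuggingFace-style causal (V)LM that exposes `output_hidden_states` and `get_output_embeddings()`, and a count that tokenizes to a single token.

```python
# Hypothetical "logit lens"-style probe: decode every layer's hidden state
# through the model's own output head and record the rank of the correct
# count token (rank 0 means it would be the greedy prediction).
import torch

@torch.no_grad()
def layerwise_count_rank(model, tokenizer, inputs, target_count: int):
    # Assumes the count renders as a single token (true for small numbers
    # in most tokenizers).
    target_id = tokenizer.encode(str(target_count), add_special_tokens=False)[0]
    out = model(**inputs, output_hidden_states=True)
    head = model.get_output_embeddings()   # reuses the (possibly faulty) head
    ranks = []
    for h in out.hidden_states:            # (batch, seq_len, hidden) per layer
        logits = head(h[:, -1, :])         # naive lens: skips the final layer norm
        rank = int((logits[0] > logits[0, target_id]).sum())
        ranks.append(rank)
    return ranks
```

Because this probe reuses the output head itself, a count that is decodable from intermediate layers but wrong in the final logits is only suggestive; a linear probe trained per layer would test more directly whether the representation encodes the count while the final mapping distorts it, which is the failure mode the paper reports.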
Problem

Research questions and friction points this paper is trying to address.

Investigating failure causes in vision-language models' counting abilities
Analyzing sensitivity to object attributes and spatial arrangements (see the stimulus sketch after this list)
Improving counting accuracy through targeted output layer fine-tuning
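One way to realize such controlled conditions is to render synthetic scenes in which the number of targets, their arrangement, and the presence of distractors vary independently. The generator below is a hypothetical sketch; the shapes, colors, and grid layout are illustrative choices, not the paper's actual stimuli.

```python
# Hypothetical stimulus generator: n_targets blue circles plus optional red
# squares as distractors, placed in distinct cells of a grid so that count,
# arrangement, and distractor co-occurrence vary independently.
import random
from PIL import Image, ImageDraw

def make_stimulus(n_targets: int, n_distractors: int = 0,
                  grid: int = 4, cell: int = 64, seed: int = 0) -> Image.Image:
    rng = random.Random(seed)
    img = Image.new("RGB", (grid * cell, grid * cell), "white")
    draw = ImageDraw.Draw(img)
    spots = rng.sample([(r, c) for r in range(grid) for c in range(grid)],
                       n_targets + n_distractors)
    for i, (r, c) in enumerate(spots):
        box = (c * cell + 8, r * cell + 8, (c + 1) * cell - 8, (r + 1) * cell - 8)
        if i < n_targets:
            draw.ellipse(box, fill="blue")     # target object
        else:
            draw.rectangle(box, fill="red")    # distractor object
    return img

# e.g. make_stimulus(5, n_distractors=3).save("stimulus.png")
```

Pairing each image with a question such as "How many blue circles are there?" yields a counting probe where each factor can be ablated in isolation.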
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuning only the output layer (see the sketch after this list)
Improving mapping from representation to output
Achieving accuracy gains up to 21%
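In a HuggingFace-style model, the targeted training described above might look like the following. This is a sketch under assumptions: the head is reachable via `get_output_embeddings()`, and the paper's exact optimizer, learning rate, and training data are not reproduced here.

```python
# Hypothetical sketch: train only the output projection (hidden -> vocab),
# keeping the vision encoder, projector, and language model frozen.
import torch

def freeze_all_but_output_head(model):
    for p in model.parameters():
        p.requires_grad = False
    head = model.get_output_embeddings()  # the final hidden->vocab projection
    # Caveat: if this head is weight-tied to the input embeddings, unfreezing
    # it also updates those; untie or clone the weights first in that case.
    for p in head.parameters():
        p.requires_grad = True
    return [p for p in model.parameters() if p.requires_grad]

# Illustrative usage (model loading and data omitted):
# trainable = freeze_all_but_output_head(vlm)
# optimizer = torch.optim.AdamW(trainable, lr=1e-4)   # lr is a placeholder
# for batch in counting_dataloader:                   # hypothetical loader
#     loss = vlm(**batch).loss                        # standard LM cross-entropy
#     loss.backward(); optimizer.step(); optimizer.zero_grad()
```

Because only one linear layer receives gradients, this setup trains a small fraction of the model's parameters, which is what makes the reported gains parameter-efficient.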
Simone Alghisi
SISLab and the Department of Information Engineering and Computer Science, University of Trento, 38123 Povo, Italy
Gabriel Roccabruna
PhD Student, SISLab, University of Trento
NLP · Dialogue System · Machine Learning · Deep Learning
Massimo Rizzoli
SISLab and the Department of Information Engineering and Computer Science, University of Trento, 38123 Povo, Italy
Seyed Mahed Mousavi
SISLab and the Department of Information Engineering and Computer Science, University of Trento, 38123 Povo, Italy
Giuseppe Riccardi
Professor of Computer Science, University of Trento, Italy
Natural Language Processing · Speech Processing · Dialogue · Machine Learning