Understanding Counting Mechanisms in Large Language and Vision-Language Models

📅 2025-11-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates the numerical representation and computation mechanisms underlying counting tasks in large language models (LLMs) and large vision-language models (LVLMs). To this end, we conduct controlled repetition experiments, causal mediation analysis, and activation patching—augmented by our custom tool CountScope. We discover, for the first time, that both LLMs and LVLMs internally instantiate transferable, implicit positional-encoding–like counters: numerical representations evolve hierarchically across network depth and dynamically distribute across spatial layouts in the visual modality; structural cues (e.g., delimiters) significantly modulate counting accuracy. These findings reveal a cross-modal, hierarchical, and structure-sensitive implicit counting mechanism. Crucially, this work establishes the first mechanistic interpretability framework for understanding foundational reasoning capabilities—specifically, discrete quantity reasoning—in foundation models.

Technology Category

Application Category

📝 Abstract
This paper examines how large language models (LLMs) and large vision-language models (LVLMs) represent and compute numerical information in counting tasks. We use controlled experiments with repeated textual and visual items and analyze model behavior through causal mediation and activation patching. To this end, we design a specialized tool, CountScope, for mechanistic interpretability of numerical content. Results show that individual tokens or visual features encode latent positional count information that can be extracted and transferred across contexts. Layerwise analyses reveal a progressive emergence of numerical representations, with lower layers encoding small counts and higher layers representing larger ones. We identify an internal counter mechanism that updates with each item, stored mainly in the final token or region and transferable between contexts. In LVLMs, numerical information also appears in visual embeddings, shifting between background and foreground regions depending on spatial composition. Models rely on structural cues such as separators in text, which act as shortcuts for tracking item counts and influence the accuracy of numerical predictions. Overall, counting emerges as a structured, layerwise process in LLMs and follows the same general pattern in LVLMs, shaped by the properties of the vision encoder.
Problem

Research questions and friction points this paper is trying to address.

Analyzing numerical representation in LLMs and LVLMs during counting tasks
Investigating internal counter mechanisms for tracking items across contexts
Examining structural cues' influence on numerical prediction accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

CountScope tool for mechanistic interpretability analysis
Internal counter mechanism updating with each item
Layerwise emergence of numerical representations in models
🔎 Similar Papers
No similar papers found.
Hosein Hasani
Hosein Hasani
Sharif University of Technology
Machine Learning
A
Amirmohammad Izadi
Department of Computer Engineering, Sharif University of Technology
F
Fatemeh Askari
Department of Computer Engineering, Sharif University of Technology
M
Mobin Bagherian
Department of Computer Engineering, Sharif University of Technology
S
Sadegh Mohammadian
Department of Computer Engineering, Sharif University of Technology
Mohammad Izadi
Mohammad Izadi
Department of Computer Engineering, Sharif University of Technology
Mahdieh Soleymani Baghshah
Mahdieh Soleymani Baghshah
Associate Professor, Computer Engineering Department, Sharif University of Technology
Deep LearningMachine Learning