🤖 AI Summary
This study investigates how large language models (LLMs) and large vision-language models (LVLMs) represent and compute numerical information in counting tasks. To this end, the authors run controlled repetition experiments, causal mediation analysis, and activation patching, supported by a custom interpretability tool, CountScope. They find that both LLMs and LVLMs internally maintain transferable, implicit counters resembling positional encodings: numerical representations emerge hierarchically across network depth, shift dynamically across spatial layouts in the visual modality, and are strongly modulated by structural cues such as delimiters. Together, these findings point to a cross-modal, hierarchical, and structure-sensitive implicit counting mechanism, and the work offers a mechanistic interpretability framework for studying a foundational reasoning capability, discrete quantity reasoning, in foundation models.
📝 Abstract
This paper examines how large language models (LLMs) and large vision-language models (LVLMs) represent and compute numerical information in counting tasks. We use controlled experiments with repeated textual and visual items and analyze model behavior through causal mediation and activation patching. For these analyses, we design a specialized tool, CountScope, for mechanistic interpretability of numerical content. Results show that individual tokens or visual features encode latent positional count information that can be extracted and transferred across contexts. Layerwise analyses reveal a progressive emergence of numerical representations, with lower layers encoding small counts and higher layers representing larger ones. We identify an internal counter mechanism that updates with each item, is stored mainly in the final token or region, and is transferable between contexts. In LVLMs, numerical information also appears in visual embeddings, shifting between background and foreground regions depending on spatial composition. Models rely on structural cues such as separators in text, which act as shortcuts for tracking item counts and influence the accuracy of numerical predictions. Overall, counting emerges as a structured, layerwise process in LLMs and follows the same general pattern in LVLMs, shaped by the properties of the vision encoder.
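The activation-patching idea underlying these experiments can be illustrated with a minimal toy sketch: cache a hidden state from a "source" run (e.g. a prompt containing several repeated items) and splice it into a "target" run at the same layer, then observe how the downstream computation changes. The NumPy stand-in below is purely illustrative; the layer stack, shapes, and the `run` helper are assumptions for the sketch, not the paper's actual models or tooling.

```python
import numpy as np

# Hypothetical stand-in for a transformer: a stack of layers, each mapping
# a hidden state to the next. Sizes and weights are illustrative only.
rng = np.random.default_rng(0)
LAYERS = [rng.standard_normal((8, 8)) * 0.1 + np.eye(8) for _ in range(4)]

def run(h, patch=None):
    """Run the toy stack; optionally overwrite the hidden state at one layer.

    patch = (layer_index, replacement_activation). Returns all per-layer
    activations so intermediate states can be cached and compared.
    """
    acts = []
    for i, weights in enumerate(LAYERS):
        h = np.tanh(h @ weights)
        if patch is not None and patch[0] == i:
            h = patch[1]  # activation patching: swap in the cached state
        acts.append(h)
    return acts

# "Source" run: cache every layer's activation.
src_acts = run(rng.standard_normal(8))

# "Target" run: once with no intervention, once with layer 2 patched.
tgt_in = rng.standard_normal(8)
clean = run(tgt_in)
patched = run(tgt_in, patch=(2, src_acts[2]))

# From the patch layer onward, the target run carries the source's state,
# so later activations diverge from the clean run.
print(np.allclose(patched[2], src_acts[2]))   # → True
print(np.allclose(patched[3], clean[3]))      # → False
```

In the paper's setting, the analogue of `src_acts[2]` would be the hidden state at the final token or image region of a context with a known item count; if patching it into another context shifts the model's predicted count toward the source's, that is evidence the state encodes a transferable counter.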