Do Vision-Language Models Measure Up? Benchmarking Visual Measurement Reading with MeasureBench

📅 2025-10-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current vision-language models (VLMs) exhibit poor performance on visual measurement reading tasks—particularly in fine-grained spatial reasoning such as pointer localization and scale alignment—leading to substantial errors. Method: We introduce MeasureBench, the first dedicated benchmark for evaluating VLMs’ measurement reading capabilities, comprising diverse real-world and controllable synthetic instrument images. Its novel, scalable synthesis pipeline enables programmatic control over critical factors including pointer position, scale density, and illumination. Contribution/Results: We systematically evaluate leading closed- and open-source VLMs, uncovering pervasive spatial grounding deficiencies. Reinforcement learning–based optimization yields modest improvements on synthetic data but fails to generalize robustly to real images. Our findings indicate that state-of-the-art VLMs lack reliable visual measurement capability. MeasureBench establishes a standardized evaluation framework and foundational dataset to advance research in this underexplored domain.

📝 Abstract
Reading measurement instruments is effortless for humans and requires relatively little domain expertise, yet, as we find in a preliminary evaluation, it remains surprisingly challenging for current vision-language models (VLMs). In this work, we introduce MeasureBench, a benchmark for visual measurement reading covering both real-world and synthesized images of various types of measurements, along with an extensible pipeline for data synthesis. Our pipeline procedurally generates a specified type of gauge with controllable visual appearance, enabling scalable variation in key details such as pointers, scales, fonts, lighting, and clutter. Evaluation of popular proprietary and open-weight VLMs shows that even the strongest frontier models struggle with measurement reading in general. A consistent failure mode is indicator localization: models can read digits or labels but misidentify the key positions of pointers or alignments, leading to large numeric errors despite plausible textual reasoning. We have also conducted preliminary experiments with reinforcement learning over synthetic data, finding encouraging results on the in-domain synthetic subset but less promising ones on real-world images. Our analysis highlights a fundamental limitation of current VLMs in fine-grained spatial grounding. We hope this resource can support future advances in visually grounded numeracy and precise spatial perception for VLMs, bridging the gap between recognizing numbers and measuring the world.
Problem

Research questions and friction points this paper is trying to address.

Benchmarking VLMs' ability to read measurements from visual instruments
Addressing failure in indicator localization causing significant numeric errors
Improving fine-grained spatial grounding and precise visual perception
Innovation

Methods, ideas, or system contributions that make the work stand out.

Procedurally generated gauges with controllable visual attributes
Extensible pipeline enabling scalable variation in key details
Reinforcement learning experiments on synthetic measurement data
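The summary does not spell out how the synthesis pipeline parameterizes a gauge, but the core idea of programmatic control over pointer position and scale range can be sketched as a small parametric spec. This is a minimal illustration, not the paper's implementation; all names (`GaugeSpec`, `random_gauge`) are hypothetical, and the actual image-rendering step is elided:

```python
import random
from dataclasses import dataclass

@dataclass
class GaugeSpec:
    """Hypothetical parametric description of a circular dial gauge."""
    value_min: float    # reading at the first major tick
    value_max: float    # reading at the last major tick
    angle_start: float  # dial angle (degrees) of the first tick
    angle_sweep: float  # total angular sweep of the scale
    n_major_ticks: int  # controls scale density

    def angle_for_value(self, value: float) -> float:
        """Pointer angle that encodes a given reading (linear scale)."""
        frac = (value - self.value_min) / (self.value_max - self.value_min)
        return self.angle_start + frac * self.angle_sweep

    def value_for_angle(self, angle: float) -> float:
        """Ground-truth reading recovered from a pointer angle."""
        frac = (angle - self.angle_start) / self.angle_sweep
        return self.value_min + frac * (self.value_max - self.value_min)

def random_gauge(rng: random.Random) -> tuple[GaugeSpec, float]:
    """Sample a gauge spec plus a target reading; rendering is elided."""
    spec = GaugeSpec(
        value_min=0.0,
        value_max=rng.choice([10.0, 100.0, 300.0]),
        angle_start=rng.uniform(-135.0, -120.0),
        angle_sweep=rng.uniform(240.0, 270.0),
        n_major_ticks=rng.choice([6, 11, 16]),
    )
    value = rng.uniform(spec.value_min, spec.value_max)
    return spec, value
```

Because the pointer angle is derived from the spec rather than annotated by hand, every synthetic image comes with an exact ground-truth reading, which is what makes benchmark labels and RL rewards cheap to obtain at scale.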
Authors: Fenfen Lin, Yesheng Liu, Haiyu Xu, Chen Yue, Zheqi He, Mingxuan Zhao, Miguel Hu Chen, Jiakang Liu, JG Yao, Xi Yang
Beijing Academy of Artificial Intelligence
Tags: Computer vision, LLM