Do Vision-Language Models Measure Up? Benchmarking Visual Measurement Reading with MeasureBench
Abstract
MeasureBench evaluates vision-language models on reading measurements from images, revealing challenges in indicator localization and fine-grained spatial grounding.
Reading measurement instruments is effortless for humans and requires relatively little domain expertise, yet it remains surprisingly challenging for current vision-language models (VLMs), as we find in preliminary evaluations. In this work, we introduce MeasureBench, a benchmark for visual measurement reading covering both real-world and synthesized images of various types of measuring instruments, along with an extensible pipeline for data synthesis. Our pipeline procedurally generates a specified type of gauge with controllable visual appearance, enabling scalable variation in key details such as pointers, scales, fonts, lighting, and clutter. Evaluation of popular proprietary and open-weight VLMs shows that even the strongest frontier models struggle with measurement reading in general. A consistent failure mode is indicator localization: models can read digits or labels but misidentify the key positions of pointers or alignments, leading to large numeric errors despite plausible-sounding textual reasoning. We also conduct preliminary experiments with reinforcement learning on synthetic data, finding encouraging results on the in-domain synthetic subset but less promising ones on real-world images. Our analysis highlights a fundamental limitation of current VLMs in fine-grained spatial grounding. We hope this resource can support future advances in visually grounded numeracy and precise spatial perception for VLMs, bridging the gap between recognizing numbers and measuring the world.
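The abstract describes the synthesis pipeline only at a high level; as a rough illustration of the idea, the sketch below (not the authors' implementation) procedurally renders a dial gauge with Pillow: it draws a scale, places the pointer at a randomly sampled value, and keeps that value as the ground-truth reading. In a fuller pipeline, appearance factors such as pointer style, fonts, lighting, and clutter would be additional randomized parameters.

```python
import math
import random

from PIL import Image, ImageDraw


def synthesize_dial(size=512, vmin=0.0, vmax=100.0, n_major=11, sweep_deg=270.0):
    """Render a simple circular dial and return (image, ground-truth reading)."""
    img = Image.new("RGB", (size, size), "white")
    draw = ImageDraw.Draw(img)
    cx = cy = size / 2
    radius = size * 0.42

    # Dial face
    draw.ellipse([cx - radius, cy - radius, cx + radius, cy + radius],
                 outline="black", width=3)

    # The pointer sweeps clockwise from 135 deg (bottom-left) over `sweep_deg`.
    start_deg = 135.0

    # Major tick marks with value labels
    for i in range(n_major):
        frac = i / (n_major - 1)
        ang = math.radians(start_deg + frac * sweep_deg)
        outer = (cx + radius * math.cos(ang), cy + radius * math.sin(ang))
        inner = (cx + (radius - 18) * math.cos(ang), cy + (radius - 18) * math.sin(ang))
        draw.line([inner, outer], fill="black", width=3)
        label = f"{vmin + frac * (vmax - vmin):g}"
        lx = cx + (radius - 42) * math.cos(ang)
        ly = cy + (radius - 42) * math.sin(ang)
        draw.text((lx - 8, ly - 6), label, fill="black")  # rough centering

    # Pointer at a uniformly sampled ground-truth value
    value = random.uniform(vmin, vmax)
    frac = (value - vmin) / (vmax - vmin)
    ang = math.radians(start_deg + frac * sweep_deg)
    tip = (cx + (radius - 30) * math.cos(ang), cy + (radius - 30) * math.sin(ang))
    draw.line([(cx, cy), tip], fill="red", width=5)

    return img, value


if __name__ == "__main__":
    img, gt = synthesize_dial()
    img.save("dial.png")
    print(f"ground-truth reading: {gt:.2f}")
```

Because the renderer knows the sampled value exactly, each synthetic image comes with a precise label for free, which is what makes this kind of pipeline attractive for scalable evaluation and training data.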
Community
Vision-Language Models (VLMs) can already tackle many complex tasks, including some college-level exam questions. Frontier VLMs also show promising spatial reasoning ability and are being explored in domains such as autonomous driving and embodied intelligence. However, many studies point out that VLMs still struggle with fine-grained visual perception, with some even claiming that “VLMs are blind.”
In this work, we seek a quantitative way to evaluate fine-grained visual perception combined with reasoning in VLMs. We focus on instrument-reading scenarios, such as reading clocks, ammeters, and measuring cylinders, which are realistic, meaningful, and surprisingly challenging for current models. To this end, we propose MeasureBench and a data-synthesis pipeline for generating measuring-instrument images, hoping to drive progress in fine-grained visual perception for VLMs.