🤖 AI Summary
This work addresses the challenge that existing vision-language models (VLMs) struggle to reliably estimate object mass under real-world perceptual conditions. To overcome this limitation, the authors propose PhysQuantAgent, an end-to-end framework for object mass estimation that augments VLM inputs with multimodal visual cues (multi-view RGB-D video, object detection, scale estimation, and cross-sectional image generation) to improve the model's understanding of object size and internal structure. The study also introduces VisPhysQuant, a new benchmark dataset of multi-view RGB-D videos of real objects paired with precise mass annotations. Experimental results show that the proposed method significantly improves mass estimation accuracy in real-world scenarios, validating the effectiveness of combining spatial reasoning with VLM-derived knowledge.
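As a rough illustration of the scale-estimation cue, the sketch below derives approximate metric object dimensions from a single RGB-D frame, given a detected bounding box and camera intrinsics. This is not the authors' implementation; the function name and parameters (`estimate_object_size`, `fx`, `fy`) are hypothetical.

```python
import numpy as np

def estimate_object_size(depth_m, bbox, fx, fy):
    """Approximate metric width/height of a detected object.

    depth_m : HxW depth map in meters (e.g., from an RGB-D camera)
    bbox    : (x_min, y_min, x_max, y_max) detection in pixel coordinates
    fx, fy  : camera focal lengths in pixels (intrinsics)
    """
    x0, y0, x1, y1 = bbox
    region = depth_m[y0:y1, x0:x1]
    # Median depth inside the box is robust to background/invalid pixels.
    z = np.median(region[region > 0])
    # Pinhole model: metric extent = pixel extent * depth / focal length.
    width_m = (x1 - x0) * z / fx
    height_m = (y1 - y0) * z / fy
    return width_m, height_m, z
```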
📝 Abstract
Vision-Language Models (VLMs) are increasingly applied to robotic perception and manipulation, yet their ability to infer physical properties required for manipulation remains limited. In particular, estimating the mass of real-world objects is essential for determining appropriate grasp force and ensuring safe interaction. However, current VLMs lack reliable mass reasoning capabilities, and most existing benchmarks do not explicitly evaluate physical quantity estimation under realistic sensing conditions. In this work, we propose PhysQuantAgent, a framework for real-world object mass estimation using VLMs, together with VisPhysQuant, a new benchmark dataset for evaluation. VisPhysQuant consists of RGB-D videos of real objects captured from multiple viewpoints, annotated with precise mass measurements. To improve estimation accuracy, we introduce three visual prompting methods that enhance the input image with object detection, scale estimation, and cross-sectional image generation to help the model comprehend the size and internal structure of the target object. Experiments show that visual prompting significantly improves mass estimation accuracy on real-world data, suggesting the efficacy of integrating spatial reasoning with VLM knowledge for physical inference.
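To make the visual-prompting step concrete, the following hypothetical sketch shows how size cues like the ones above could be folded into the text side of a VLM query, alongside the annotated and cross-sectional images. The prompt wording and the `build_mass_prompt` helper are illustrative assumptions, not the paper's actual interface.

```python
def build_mass_prompt(label, width_m, height_m, distance_m):
    # Embed the detected class and estimated metric size in the text prompt
    # so the VLM can ground its mass estimate in physical scale rather than
    # pixel size alone.
    return (
        f"The image shows a {label} approximately "
        f"{width_m:.2f} m wide and {height_m:.2f} m tall, "
        f"viewed from about {distance_m:.2f} m away. "
        "Considering its likely material and internal structure "
        "(see the cross-sectional rendering), estimate its mass in grams."
    )

# Example usage with hypothetical values from the scale-estimation step:
# build_mass_prompt("ceramic mug", 0.09, 0.11, 0.55)
```

The design intuition, as the abstract suggests, is that the VLM supplies prior knowledge (typical materials and densities) while the visual prompts supply the spatial grounding it cannot reliably infer on its own.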