AI Summary
The community currently lacks standardized benchmarks for evaluating the low-level robotic manipulation reasoning of vision-language models (VLMs), such as object-object interaction and deformable object handling.
Method: We introduce ManipBench, the first dedicated benchmark that systematically defines and quantifies VLM capabilities at the level of fine-grained motion reasoning. It features multi-dimensional task design and is validated through both simulation and real-robot experiments to ensure strong correlation with actual manipulation performance.
Contribution/Results: We conduct a unified evaluation across 33 representative VLMs spanning 10 major families and diverse parameter scales. Results reveal substantial cross-task performance variation; while VLM scores strongly correlate with real-world success rates, overall performance remains significantly below human levels. This work fills a critical gap in low-level manipulation capability assessment for VLMs and establishes a reproducible, extensible evaluation paradigm for future research.
Abstract
Vision-Language Models (VLMs) have revolutionized artificial intelligence and robotics due to their commonsense reasoning capabilities. In robotic manipulation, VLMs are used primarily as high-level planners, but recent work has also studied their lower-level reasoning ability, which refers to making decisions about precise robot movements. However, the community currently lacks a clear and common benchmark that can evaluate how well VLMs can aid low-level reasoning in robotics. Consequently, we propose a novel benchmark, ManipBench, to evaluate the low-level robot manipulation reasoning capabilities of VLMs across various dimensions, including how well they understand object-object interactions and deformable object manipulation. We extensively test 33 representative VLMs across 10 model families on our benchmark, including variants to test different model sizes. Our evaluation shows that the performance of VLMs significantly varies across tasks, and there is a strong correlation between this performance and trends in our real-world manipulation tasks. It also shows that there remains a significant gap between these models and human-level understanding. See our website at: https://manipbench.github.io.