ManipBench: Benchmarking Vision-Language Models for Low-Level Robot Manipulation

📅 2025-05-14
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Current vision-language models (VLMs) lack standardized benchmarks for evaluating low-level robotic manipulation reasoning, such as object interaction and deformable object handling. Method: We introduce ManipBench, the first dedicated benchmark that systematically defines and quantifies VLM capabilities at the fine-grained motion reasoning level. It features multi-dimensional task design and is rigorously validated through both simulation and real-robot experiments to ensure high correlation with actual manipulation performance. Contribution/Results: We conduct a unified evaluation across 33 representative VLMs spanning 10 major families and diverse parameter scales. Results reveal substantial cross-task performance variation; while VLM scores strongly correlate with real-world success rates, overall performance remains significantly below human levels. This work fills a critical gap in low-level manipulation capability assessment for VLMs and establishes a reproducible, extensible evaluation paradigm for future research.

๐Ÿ“ Abstract
Vision-Language Models (VLMs) have revolutionized artificial intelligence and robotics due to their commonsense reasoning capabilities. In robotic manipulation, VLMs are used primarily as high-level planners, but recent work has also studied their lower-level reasoning ability, which refers to making decisions about precise robot movements. However, the community currently lacks a clear and common benchmark that can evaluate how well VLMs can aid low-level reasoning in robotics. Consequently, we propose a novel benchmark, ManipBench, to evaluate the low-level robot manipulation reasoning capabilities of VLMs across various dimensions, including how well they understand object-object interactions and deformable object manipulation. We extensively test 33 representative VLMs across 10 model families on our benchmark, including variants to test different model sizes. Our evaluation shows that the performance of VLMs significantly varies across tasks, and there is a strong correlation between this performance and trends in our real-world manipulation tasks. It also shows that there remains a significant gap between these models and human-level understanding. See our website at: https://manipbench.github.io.
Problem

Research questions and friction points this paper is trying to address.

Lack of benchmark for VLMs in low-level robot manipulation
Need to evaluate VLM understanding of object interactions
Assess VLM performance gap compared to human-level reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes ManipBench for VLM low-level reasoning evaluation
Tests 33 VLMs across 10 model families
Analyzes performance correlation with real-world tasks