🤖 AI Summary
Existing embodied intelligence benchmarks predominantly measure execution success rates, neglecting systematic evaluation of cognitive capabilities, and suffer from low task fidelity and incomplete assessment dimensions. This paper introduces RoboBench—the first comprehensive cognitive evaluation benchmark designed specifically for multimodal large language models (MLLMs) serving as "embodied brains." It systematically assesses five core cognitive dimensions: instruction comprehension, perception reasoning, generalized planning, affordance prediction, and failure analysis. The authors propose a novel MLLM-as-world-simulator evaluation framework, which validates behavioral plausibility by simulating physical state transitions, and construct a high-fidelity, multi-view, attribute-rich robot-oriented question-answering dataset. Extensive experiments across 14 state-of-the-art MLLMs reveal critical bottlenecks in implicit instruction comprehension, spatiotemporal reasoning, cross-scenario planning, fine-grained affordance understanding, and failure diagnosis.
📝 Abstract
Building robots that can perceive, reason, and act in dynamic, unstructured environments remains a core challenge. Recent embodied systems often adopt a dual-system paradigm, where System 2 handles high-level reasoning while System 1 executes low-level control. In this work, we refer to System 2 as the embodied brain, emphasizing its role as the cognitive core for reasoning and decision-making in manipulation tasks. Given this role, systematic evaluation of the embodied brain is essential. Yet existing benchmarks emphasize execution success or, when targeting high-level reasoning, suffer from incomplete dimensions and limited task realism, offering only a partial picture of cognitive capability. To bridge this gap, we introduce RoboBench, a benchmark that systematically evaluates multimodal large language models (MLLMs) as embodied brains. Motivated by the embodied brain's critical roles across the full manipulation pipeline, RoboBench defines five dimensions (instruction comprehension, perception reasoning, generalized planning, affordance prediction, and failure analysis) spanning 14 capabilities, 25 tasks, and 6092 QA pairs. To ensure realism, we curate datasets across diverse embodiments, attribute-rich objects, and multi-view scenes, drawing from large-scale real robotic data. For planning, RoboBench introduces an evaluation framework, MLLM-as-world-simulator, which evaluates embodied feasibility by simulating whether predicted plans can achieve critical object-state changes. Experiments on 14 MLLMs reveal fundamental limitations: difficulties with implicit instruction comprehension, spatiotemporal reasoning, cross-scenario planning, fine-grained affordance understanding, and execution-failure diagnosis. RoboBench provides a comprehensive scaffold to quantify high-level cognition and guide the development of next-generation embodied MLLMs. The project page is at https://robo-bench.github.io.
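The core idea behind the MLLM-as-world-simulator evaluation (judging a predicted plan by whether simulated state transitions reach the required object-state changes, rather than by string-matching against a reference plan) can be illustrated with a minimal symbolic sketch. All step names, predicates, and the precondition/effect model below are illustrative assumptions, not the paper's actual implementation:

```python
# Hedged sketch: score a predicted plan by simulating object-state changes.
# Each step is a hypothetical (preconditions, effects) pair over a symbolic
# world state; a plan "passes" if simulation reaches all goal object states.

def simulate_plan(initial_state, plan, steps):
    """Apply each step's effects to a copy of the world state.

    A step whose preconditions do not hold is skipped, modeling an
    action that is not physically executable in the current state.
    """
    state = {obj: dict(attrs) for obj, attrs in initial_state.items()}
    for step in plan:
        preconds, effects = steps[step]
        if not all(state.get(o, {}).get(k) == v for (o, k, v) in preconds):
            continue  # infeasible step: world unchanged
        for o, k, v in effects:
            state.setdefault(o, {})[k] = v
    return state

def plan_achieves_goal(initial_state, plan, steps, goal):
    """Embodied feasibility: does simulation reach every goal state?"""
    final = simulate_plan(initial_state, plan, steps)
    return all(final.get(o, {}).get(k) == v
               for o, attrs in goal.items()
               for k, v in attrs.items())

# Illustrative task: "put the apple in the bowl".
initial = {"apple": {"location": "table"}, "bowl": {"location": "table"}}
steps = {
    "pick(apple)":        ([("apple", "location", "table")],
                           [("apple", "location", "gripper")]),
    "place(apple, bowl)": ([("apple", "location", "gripper")],
                           [("apple", "location", "bowl")]),
}
goal = {"apple": {"location": "bowl"}}

good_plan = ["pick(apple)", "place(apple, bowl)"]
bad_plan = ["place(apple, bowl)"]  # skips the pick, so the place cannot fire
```

In this toy model `plan_achieves_goal` accepts `good_plan` and rejects `bad_plan`, even though `bad_plan` contains the "correct" final step, which is the distinction a pure surface-level plan comparison would miss. In the benchmark itself this simulation role is played by an MLLM rather than hand-written rules.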