🤖 AI Summary
This work addresses the lack of accountability and reliability evaluation for multimodal large language models (MLLMs) in high-risk robotic manipulation. We introduce the first benchmark dedicated to responsible robotic manipulation, comprising 23 multi-stage tasks spanning electrical, chemical, and personal safety-critical scenarios, with validated sim-to-real transferability. Methodologically, we integrate MLLMs with visual perception, in-context learning, hazard detection, and multi-representation action execution within a reproducible framework, supported by a novel multimodal dataset. Our approach uniquely unifies risk-aware reasoning, moral inference, and physics-grounded planning, and introduces new evaluation metrics, including the "safe success rate," to quantify responsible behavior. Experimental results demonstrate that the benchmark effectively differentiates agents by safety compliance, robustness to environmental perturbations, and generalization to unseen tasks and hazards.
📝 Abstract
Recent advances in large multimodal models have enabled new opportunities in embodied AI, particularly in robotic manipulation. These models have shown strong potential in generalization and reasoning, but achieving reliable and responsible robotic behavior in real-world settings remains an open challenge. In high-stakes environments, robotic agents must go beyond basic task execution to perform risk-aware reasoning, moral decision-making, and physically grounded planning. We introduce ResponsibleRobotBench, a systematic benchmark designed to evaluate and accelerate progress in responsible robotic manipulation from simulation to the real world. The benchmark consists of 23 multi-stage tasks spanning diverse risk types, including electrical, chemical, and human-related hazards, and varying levels of physical and planning complexity. These tasks require agents to detect and mitigate risks, reason about safety, plan sequences of actions, and engage human assistance when necessary. Our benchmark includes a general-purpose evaluation framework that supports multimodal model-based agents with various action representation modalities. The framework integrates visual perception, in-context learning, prompt construction, hazard detection, reasoning and planning, and physical execution. It also provides a rich multimodal dataset, supports reproducible experiments, and includes standardized metrics such as success rate, safety rate, and safe success rate. Through extensive experimental setups, ResponsibleRobotBench enables analysis across risk categories, task types, and agent configurations. By emphasizing physical reliability, generalization, and safety in decision-making, this benchmark provides a foundation for developing trustworthy, responsible dexterous robotic systems in the real world. https://sites.google.com/view/responsible-robotbench
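To make the three standardized metrics concrete, here is a minimal sketch of how such per-episode rates are commonly computed. This is an illustrative assumption, not the benchmark's actual implementation: it assumes each episode is logged as a pair of booleans (task completed, no safety violation), and that the safe success rate counts episodes that are both successful and violation-free.

```python
def evaluate(episodes):
    """Compute illustrative benchmark-style rates.

    episodes: list of (success, safe) boolean pairs, one per episode.
    Returns success rate, safety rate, and safe success rate,
    each as a fraction of all episodes.
    """
    n = len(episodes)
    if n == 0:
        return {"success_rate": 0.0, "safety_rate": 0.0, "safe_success_rate": 0.0}
    success = sum(1 for s, _ in episodes if s)          # task completed
    safe = sum(1 for _, v in episodes if v)             # no safety violation
    safe_success = sum(1 for s, v in episodes if s and v)  # both at once
    return {
        "success_rate": success / n,
        "safety_rate": safe / n,
        "safe_success_rate": safe_success / n,
    }


# Example: 4 episodes -> 50% success, 50% safe, 25% safely successful
metrics = evaluate([(True, True), (True, False), (False, True), (False, False)])
print(metrics)
```

Under this reading, the safe success rate is strictly no larger than either of the other two rates, which is why it is the most demanding of the three measures.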