Guardian: Detecting Robotic Planning and Execution Errors with Vision-Language Models

📅 2025-12-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Robust failure detection in robotic manipulation remains limited due to the scarcity of high-quality, diverse failure data for vision-language models (VLMs). Method: We propose an automated failure synthesis framework that generates fine-grained failure samples and interpretable reasoning trajectories via programmable perturbations and multi-view VLM-driven analysis in both simulation and real-world settings. Our approach models failures as explainable semantic-action deviations and enables end-to-end failure detection and root-cause attribution. Contribution/Results: We construct three large-scale failure detection benchmarks and achieve state-of-the-art performance on both existing and newly introduced benchmarks. When integrated into a robot operating system, our method significantly improves task success rates. This work is the first to systematically address the data bottleneck hindering VLM-based failure understanding in robotics, empirically validating the effectiveness and scalability of synthetic-data-driven, fine-grained failure reasoning for real-world deployment.

📝 Abstract
Robust robotic manipulation requires reliable failure detection and recovery. Although current Vision-Language Models (VLMs) show promise, their accuracy and generalization are limited by the scarcity of failure data. To address this data gap, we propose an automatic robot failure synthesis approach that procedurally perturbs successful trajectories to generate diverse planning and execution failures. This method produces not only binary classification labels but also fine-grained failure categories and step-by-step reasoning traces in both simulation and the real world. With it, we construct three new failure detection benchmarks: RLBench-Fail, BridgeDataV2-Fail, and UR5-Fail, substantially expanding the diversity and scale of existing failure datasets. We then train Guardian, a VLM with multi-view images for detailed failure reasoning and detection. Guardian achieves state-of-the-art performance on both existing and newly introduced benchmarks. It also effectively improves task success rates when integrated into a state-of-the-art manipulation system in simulation and real robots, demonstrating the impact of our generated failure data.
Problem

Research questions and friction points this paper is trying to address.

Detects robotic planning and execution errors using vision-language models
Addresses data scarcity by synthesizing diverse failure trajectories automatically
Improves task success rates through enhanced failure reasoning and detection
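The failure-synthesis idea above — procedurally perturbing a successful trajectory to produce a labeled failure sample — can be sketched roughly as follows. This is a minimal illustration, not the paper's actual pipeline: the `Trajectory` class, the two perturbation modes (`offset_goal`, `early_stop`), and the magnitudes are all assumptions for the sake of example; the paper's perturbations and failure taxonomy are richer.

```python
import random
from dataclasses import dataclass

# Hypothetical minimal trajectory representation: each waypoint is an
# (x, y, z) end-effector position. The paper's real data format is richer
# (images, gripper state, language annotations, reasoning traces).
@dataclass
class Trajectory:
    waypoints: list
    label: str = "success"
    failure_type: str = ""

def perturb_trajectory(traj, failure_type, magnitude=0.05, seed=0):
    """Procedurally inject a failure into a successful trajectory.

    Two illustrative perturbation modes (assumed, not the paper's set):
      - "offset_goal": shift the final waypoint so the grasp/placement misses.
      - "early_stop":  truncate the trajectory so execution halts prematurely.
    Returns a new Trajectory labeled with the injected failure category,
    which is what makes the synthesized data usable for supervised training.
    """
    rng = random.Random(seed)
    pts = [tuple(p) for p in traj.waypoints]
    if failure_type == "offset_goal":
        x, y, z = pts[-1]
        pts[-1] = (x + rng.uniform(-magnitude, magnitude),
                   y + rng.uniform(-magnitude, magnitude), z)
    elif failure_type == "early_stop":
        cut = max(1, len(pts) // 2)
        pts = pts[:cut]
    else:
        raise ValueError(f"unknown failure type: {failure_type}")
    return Trajectory(waypoints=pts, label="failure", failure_type=failure_type)

# Usage: synthesize one execution failure from a toy pick trajectory.
success = Trajectory(waypoints=[(0.0, 0.0, 0.2), (0.1, 0.0, 0.1), (0.1, 0.0, 0.0)])
failed = perturb_trajectory(success, "early_stop")
print(failed.label, failed.failure_type, len(failed.waypoints))
# → failure early_stop 1
```

Because each perturbation is applied programmatically, the ground-truth failure category comes for free, which is the property that lets the framework generate fine-grained labels at scale.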
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automatic robot failure synthesis from successful trajectories
Multi-view VLM for detailed failure reasoning and detection
New failure detection benchmarks expanding dataset diversity and scale
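A multi-view VLM query of the kind described above might be assembled along these lines. This is only a hedged sketch: Guardian's actual prompt format, output schema, and failure categories are not specified here, and the message layout and `FAILURE:<category>` convention below are assumptions for illustration.

```python
# Hypothetical sketch of a multi-view failure-detection query to a VLM.
# The chat-style message structure and verdict format are assumptions,
# not Guardian's actual interface.
def build_failure_query(task_instruction, view_images):
    """Pack multi-view observations and the task into one chat-style request.

    view_images maps a camera name (e.g. "front", "wrist") to an encoded
    image payload; interleaving a text label before each image tells the
    model which viewpoint it is looking at.
    """
    content = [{"type": "text",
                "text": f"Task: {task_instruction}\n"
                        "Given the camera views below, reason step by step, "
                        "then answer with SUCCESS or FAILURE:<category>."}]
    for name, image_b64 in view_images.items():
        content.append({"type": "text", "text": f"View: {name}"})
        content.append({"type": "image", "data": image_b64})
    return [{"role": "user", "content": content}]

def parse_verdict(reply_text):
    """Extract (is_failure, category) from the model's final line."""
    last = reply_text.strip().splitlines()[-1]
    if last.startswith("FAILURE:"):
        return True, last.split(":", 1)[1].strip()
    return False, ""

# Usage with placeholder strings instead of real encoded camera frames.
msgs = build_failure_query("put the red block in the bowl",
                           {"front": "<b64>", "wrist": "<b64>"})
print(len(msgs[0]["content"]))  # 1 text prompt + 2 x (view label + image) = 5
print(parse_verdict("The gripper closed early.\nFAILURE: premature grasp"))
# → (True, 'premature grasp')
```

Returning a structured verdict rather than free text is what allows a manipulation system to branch on the result, e.g. triggering replanning or recovery when `is_failure` is true.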