FailSafe: Reasoning and Recovery from Failures in Vision-Language-Action Models

📅 2025-10-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current Vision-Language-Action (VLA) models lack the capability to reason about and recover from unexpected operational failures: mainstream robotic datasets provide only successful trajectories, while failure-detection benchmarks are largely limited to non-executable textual explanations. To address this, the authors propose FailSafe, a scalable framework for failure generation and recovery that is compatible with arbitrary tasks and simulation environments. FailSafe automatically synthesizes diverse, realistic failure scenarios and pairs them with executable recovery actions. The authors instantiate FailSafe-VLM by fine-tuning LLaVA-OneVision-7B, integrating visual, linguistic, and action modalities for end-to-end failure detection and recovery policy learning. Evaluated on ManiSkill, FailSafe improves the performance of state-of-the-art VLA models, including pi0-FAST and OpenVLA, by up to 22.6% on average. Moreover, it demonstrates strong generalization across varying spatial configurations, camera viewpoints, and robot morphologies.

📝 Abstract
Recent advances in robotic manipulation have integrated low-level robotic control into Vision-Language Models (VLMs), extending them into Vision-Language-Action (VLA) models. Although state-of-the-art VLAs achieve strong performance in downstream robotic applications, supported by large-scale crowd-sourced robot training data, they still inevitably encounter failures during execution. Enabling robots to reason about and recover from unpredictable and abrupt failures remains a critical challenge. Existing robotic manipulation datasets, collected in either simulation or the real world, primarily provide only ground-truth trajectories, leaving robots unable to recover once failures occur. Moreover, the few datasets that address failure detection typically offer only textual explanations, which are difficult to utilize directly in VLA models. To address this gap, we introduce FailSafe, a novel failure generation and recovery system that automatically produces diverse failure cases paired with executable recovery actions. FailSafe can be seamlessly applied to any manipulation task in any simulator, enabling scalable creation of failure-action data. To demonstrate its effectiveness, we fine-tune LLaVA-OneVision-7B (LLaVA-OV-7B) to build FailSafe-VLM. Experimental results show that FailSafe-VLM successfully helps robotic arms detect and recover from potential failures, improving the performance of three state-of-the-art VLA models (pi0-FAST, OpenVLA, OpenVLA-OFT) by up to 22.6% on average across several tasks in ManiSkill. Furthermore, FailSafe-VLM generalizes across different spatial configurations, camera viewpoints, and robotic embodiments. We plan to release the FailSafe code to the community.
Problem

Research questions and friction points this paper is trying to address.

Enabling robots to reason about and recover from execution failures
Addressing the lack of failure recovery data in robotic manipulation datasets
Providing executable recovery actions for vision-language-action models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automatically generates failure cases with recovery actions
Seamlessly applies to any manipulation task in simulators
Fine-tunes VLM for failure detection and recovery improvement
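The page does not include the FailSafe implementation, but the core idea above (turn successful trajectories into paired failure states and executable recovery actions) can be illustrated with a minimal, self-contained sketch. All names and the toy trajectory below are hypothetical, not from the paper; a real pipeline would perturb simulator state (e.g. in ManiSkill) and record full observations rather than bare waypoints.

```python
import random

def inject_failure(trajectory, step, noise=0.05):
    """Perturb the end-effector pose at `step` to simulate an abrupt failure
    (e.g. a slipped grasp or a bumped object). Hypothetical simplification."""
    failed = list(trajectory)
    failed[step] = tuple(x + random.uniform(-noise, noise) for x in trajectory[step])
    return failed

def recovery_action(failed_pose, nominal_pose):
    """An *executable* recovery label: a delta that moves the arm from the
    failed pose back onto the nominal trajectory."""
    return tuple(n - f for f, n in zip(failed_pose, nominal_pose))

def generate_failure_data(trajectory, n_cases=10, seed=0):
    """Expand one successful trajectory into many (failure state, recovery
    action) training pairs, the kind of data the abstract says is missing
    from existing manipulation datasets."""
    random.seed(seed)
    data = []
    for _ in range(n_cases):
        step = random.randrange(1, len(trajectory) - 1)
        failed = inject_failure(trajectory, step)
        data.append({
            "step": step,
            "failed_pose": failed[step],
            "recovery": recovery_action(failed[step], trajectory[step]),
        })
    return data

# Toy (x, y, z) end-effector waypoints standing in for a successful rollout.
nominal = [(0.0, 0.0, 0.3), (0.1, 0.0, 0.2), (0.2, 0.0, 0.1), (0.2, 0.0, 0.0)]
cases = generate_failure_data(nominal, n_cases=4)
for c in cases:
    # Applying the recovery delta restores the nominal waypoint exactly.
    restored = tuple(f + r for f, r in zip(c["failed_pose"], c["recovery"]))
    assert all(abs(a - b) < 1e-9 for a, b in zip(restored, nominal[c["step"]]))
```

Because the recovery label is derived analytically from the nominal trajectory, the same loop scales to any task or simulator where a successful rollout is available, which is the scalability property the Innovation section highlights.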