Scene Graph-Guided Proactive Replanning for Failure-Resilient Embodied Agent

📅 2025-08-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Autonomous robots frequently fail in dynamic environments due to unforeseen changes. Existing approaches either replan reactively after a failure has occurred or rely on manually engineered rules for proactive replanning; both suffer from limited generalizability and robustness. This paper proposes a scene-graph-guided proactive replanning framework: at each subtask boundary, it constructs a scene graph from real-time RGB-D observations and matches it semantically and spatially against a reference scene graph extracted from expert demonstrations to automatically detect deviations. Upon detecting a mismatch, a lightweight reasoning module is triggered preemptively, without handcrafted rules, to revise the plan before execution. To the authors' knowledge, this is the first work to integrate scene graph matching into proactive replanning, enabling semantic consistency verification prior to execution. Evaluated in the AI2-THOR simulator, the method significantly improves task success rates and fault tolerance.

📝 Abstract
When humans perform everyday tasks, we naturally adjust our actions based on the current state of the environment. For instance, if we intend to put something into a drawer but notice it is closed, we open it first. However, many autonomous robots lack this adaptive awareness. They often follow pre-planned actions that may overlook subtle yet critical changes in the scene, which can result in actions being executed under outdated assumptions and eventual failure. While replanning is critical for robust autonomy, most existing methods respond only after failures occur, when recovery may be inefficient or infeasible. While proactive replanning holds promise for preventing failures in advance, current solutions often rely on manually designed rules and extensive supervision. In this work, we present a proactive replanning framework that detects and corrects failures at subtask boundaries by comparing scene graphs constructed from current RGB-D observations against reference graphs extracted from successful demonstrations. When the current scene fails to align with reference trajectories, a lightweight reasoning module is activated to diagnose the mismatch and adjust the plan. Experiments in the AI2-THOR simulator demonstrate that our approach detects semantic and spatial mismatches before execution failures occur, significantly improving task success and robustness.
Problem

Research questions and friction points this paper is trying to address.

Autonomous robots lack adaptive awareness for environmental changes
Existing replanning methods react only after failures occur
Current proactive solutions rely on manual rules and supervision
Innovation

Methods, ideas, or system contributions that make the work stand out.

Scene graph comparison for failure detection
Lightweight reasoning module for mismatch diagnosis
Proactive replanning at subtask boundaries
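The core idea of the innovation above can be illustrated with a minimal sketch: represent each scene graph as a set of (subject, relation, target) triples, diff the current graph against the reference graph at a subtask boundary, and invoke a replanner only when they disagree. All names here (`Edge`, `find_mismatches`, `maybe_replan`) and the relation vocabulary are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Edge:
    """One scene-graph relation, e.g. Edge("cup", "on", "counter")."""
    subject: str
    relation: str
    target: str


def find_mismatches(current: set, reference: set) -> dict:
    """Diff the current scene graph against the reference graph.

    Returns relations the demonstration expects but that are absent now
    ("missing"), and relations present now but not expected ("unexpected").
    """
    return {
        "missing": reference - current,
        "unexpected": current - reference,
    }


def maybe_replan(current: set, reference: set, replanner):
    """At a subtask boundary, trigger the reasoning module only on mismatch."""
    diff = find_mismatches(current, reference)
    if diff["missing"] or diff["unexpected"]:
        # Hand the diagnosed deviation to the (hypothetical) reasoning
        # module, which revises the plan before execution continues.
        return replanner(diff)
    return None  # scene matches the demonstration; proceed as planned


# Example from the abstract: the demonstration expects an open drawer,
# but the drawer is currently closed.
reference = {Edge("drawer", "state", "open"), Edge("cup", "on", "counter")}
current = {Edge("drawer", "state", "closed"), Edge("cup", "on", "counter")}

diff = find_mismatches(current, reference)
# diff["missing"] contains Edge("drawer", "state", "open")
# diff["unexpected"] contains Edge("drawer", "state", "closed")
```

A replanner passed to `maybe_replan` could, for instance, prepend an "open drawer" subtask when the closed-drawer mismatch is diagnosed; because matching happens before the affected subtask runs, the failure is prevented rather than recovered from.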
👥 Authors
Che Rin Yu (Korea University)
Daewon Chae (University of Michigan)
Dabin Seo (Korea University)
Sangwon Lee (KT (Korea Telecom) R&D Center)
Hyeongwoo Im (KT (Korea Telecom) R&D Center)
Jinkyu Kim (Korea University)