MoMaStage: Skill-State Graph Guided Planning and Closed-Loop Execution for Long-Horizon Indoor Mobile Manipulation

📅 2026-03-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses task failure in long-horizon indoor mobile manipulation caused by cascading errors and limited generalization across environments. The authors propose a structured vision-language framework that eschews explicit scene mapping and instead grounds a vision-language model in a topology-aware skill-state graph coupled with a hierarchical skill library. This design constrains task decomposition and skill composition to logically consistent, topologically valid plans, while closed-loop execution monitors proprioceptive feedback and triggers graph-constrained semantic replanning when execution deviates from the plan. Experimental results demonstrate that the proposed approach significantly outperforms existing methods in both simulation and real-world environments, achieving substantially higher planning success and task completion rates while reducing the computational overhead of large-model invocations.
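As a loose illustration (not the paper's actual interface or skill set), the sketch below shows how a topology-aware skill-state graph can restrict skill composition to transitions that are feasible from the agent's current abstract state; all class names, skill names, and states here are hypothetical.

```python
# Illustrative sketch of a topology-aware skill-state graph. Each skill is only
# applicable from a given abstract agent state and, once executed, moves the
# agent to a new state, so any valid plan must trace a path through the graph.

from dataclasses import dataclass


@dataclass(frozen=True)
class Skill:
    name: str
    pre_state: str   # abstract state required before execution
    post_state: str  # abstract state reached after execution


class SkillStateGraph:
    def __init__(self, skills):
        self.skills = {s.name: s for s in skills}

    def applicable(self, state):
        """Skills whose precondition matches the current abstract state."""
        return [s.name for s in self.skills.values() if s.pre_state == state]

    def validate_plan(self, start_state, plan):
        """Check that a skill sequence is a topologically valid path."""
        state = start_state
        for name in plan:
            skill = self.skills.get(name)
            if skill is None or skill.pre_state != state:
                return False, state  # invalid transition at this step
            state = skill.post_state
        return True, state


# Example: a pick-and-place fragment of a hypothetical skill library.
graph = SkillStateGraph([
    Skill("navigate_to_object", "idle", "at_object"),
    Skill("grasp_object", "at_object", "holding"),
    Skill("navigate_to_target", "holding", "at_target"),
    Skill("place_object", "at_target", "idle"),
])

ok, end_state = graph.validate_plan(
    "idle",
    ["navigate_to_object", "grasp_object", "navigate_to_target", "place_object"],
)
print(ok, end_state)  # True idle
```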

📝 Abstract
Indoor mobile manipulation (MoMA) enables robots to translate natural language instructions into physical actions, yet long-horizon execution remains challenging due to cascading errors and limited generalization across diverse environments. Learning-based approaches often fail to maintain logical consistency over extended horizons, while methods relying on explicit scene representations impose rigid structural assumptions that reduce adaptability in dynamic settings. To address these limitations, we propose MoMaStage, a structured vision-language framework for long-horizon MoMA that eliminates the need for explicit scene mapping. MoMaStage grounds a Vision-Language Model (VLM) within a Hierarchical Skill Library and a topology-aware Skill-State Graph, constraining task decomposition and skill composition within a feasible transition space. This structured grounding ensures that generated plans remain logically consistent and topologically valid with respect to the agent's evolving physical state. To enhance robustness, MoMaStage incorporates a closed-loop execution mechanism that monitors proprioceptive feedback and triggers graph-constrained semantic replanning when deviations are detected, maintaining alignment between planned skills and physical outcomes. Extensive experiments in physics-rich simulations and real-world environments demonstrate that MoMaStage outperforms state-of-the-art baselines, achieving substantially higher planning success, reducing token overhead, and significantly improving overall task success rates in long-horizon mobile manipulation. Video demonstrations are available on the project website: https://chenxuli-cxli.github.io/MoMaStage/.
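Building on the SkillStateGraph sketch above, the following snippet illustrates the closed-loop idea described in the abstract: execute each planned skill, compare the proprioceptively observed state with the expected post-state, and replan within the graph's feasible transitions when they disagree. The breadth-first planner stands in for the paper's VLM-guided decomposition and the executor is a mock; none of these are the authors' actual components.

```python
# Continuing the SkillStateGraph sketch above: a simplified closed-loop
# executor with graph-constrained replanning. All functions are illustrative.

from collections import deque
import random


def plan_path(graph, start_state, goal_state):
    """Breadth-first search for a valid skill sequence in the graph
    (a stand-in for the VLM-guided, graph-constrained decomposition)."""
    queue = deque([(start_state, [])])
    visited = {start_state}
    while queue:
        state, path = queue.popleft()
        if state == goal_state:
            return path
        for skill in graph.skills.values():
            if skill.pre_state == state and skill.post_state not in visited:
                visited.add(skill.post_state)
                queue.append((skill.post_state, path + [skill.name]))
    return None


def execute_skill(graph, skill_name, failure_rate=0.2):
    """Mock low-level execution: usually reaches the expected post-state,
    occasionally fails and leaves the agent in its previous state."""
    skill = graph.skills[skill_name]
    return skill.post_state if random.random() > failure_rate else skill.pre_state


def run_closed_loop(graph, start_state, goal_state, max_replans=5):
    state, replans = start_state, 0
    plan = plan_path(graph, state, goal_state) or []
    while plan:
        skill_name = plan.pop(0)
        observed = execute_skill(graph, skill_name)  # proprioceptive outcome
        if observed != graph.skills[skill_name].post_state:
            if replans == max_replans:
                return False, observed
            replans += 1
            # Deviation detected: replan from the observed state, restricted
            # to transitions the graph actually allows from that state.
            plan = plan_path(graph, observed, goal_state) or []
        state = observed
    return state == goal_state, state


print(run_closed_loop(graph, "idle", "at_target"))
```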
Problem

Research questions and friction points this paper is trying to address.

long-horizon indoor mobile manipulation
cascading errors
limited generalization
logical consistency
dynamic environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Skill-State Graph
Vision-Language Model
Closed-Loop Execution
Long-Horizon Manipulation
Topological Planning