🤖 AI Summary
In open-world robotic manipulation, existing approaches face a dual challenge: procedural and declarative skills remain disjoint, preventing a single system from combining high-level cognitive reasoning with low-level action execution. To address this, we propose RoBridge, a unified framework that leverages vision-language models (VLMs) for semantic understanding and hierarchical task decomposition, introduces symbolic "Invariant Operable Representations" (IORs) as a semantic-to-action bridge, and employs a reinforcement-learning-trained generalist embodied agent for procedural skill transfer. RoBridge synergistically combines hierarchical policy decoupling, symbolic representation learning, and sim-to-real adaptation. Empirically, with only five real-world demonstrations per task, it achieves a 75% success rate on unseen tasks and an 83% average success rate in sim-to-real generalization.
📝 Abstract
Operating robots in open-ended scenarios with diverse tasks is a crucial research and application direction in robotics. While recent progress in natural language processing and large multimodal models has enhanced robots' ability to understand complex instructions, robot manipulation in open environments still faces both a procedural skill dilemma and a declarative skill dilemma, and existing methods typically trade cognitive capability against executive capability. To address these challenges, we propose RoBridge, a hierarchical intelligent architecture for general robotic manipulation. It consists of a high-level cognitive planner (HCP) based on a large-scale pre-trained vision-language model (VLM), an invariant operable representation (IOR) serving as a symbolic bridge, and a generalist embodied agent (GEA). RoBridge preserves the declarative skill of the VLM while unleashing the procedural skill of reinforcement learning, effectively bridging the gap between cognition and execution. RoBridge demonstrates significant performance improvements over existing baselines, achieving a 75% success rate on new tasks and an 83% average success rate in sim-to-real generalization using only five real-world demonstrations per task. This work represents a significant step towards integrating cognitive reasoning with physical execution in robotic systems, offering a new paradigm for general robotic manipulation.
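To make the HCP → IOR → GEA pipeline concrete, here is a minimal sketch of the control flow the abstract describes: a planner decomposes an instruction into symbolic intermediate representations, which a low-level agent then executes. All class names, the IOR fields, and the toy plan are illustrative assumptions, not the paper's actual API or representation.

```python
from dataclasses import dataclass

@dataclass
class IOR:
    """Invariant Operable Representation: a symbolic, embodiment-agnostic
    description of one manipulation step (assumed structure)."""
    object_id: str     # what to act on
    primitive: str     # e.g. "grasp", "place"
    target_pose: tuple # where, in a canonical frame

class HighLevelCognitivePlanner:
    """Stands in for the VLM-based HCP: decomposes an instruction
    into a sequence of IORs."""
    def plan(self, instruction: str) -> list:
        # A real HCP would query a pre-trained VLM; here we hard-code
        # a toy two-step decomposition for illustration.
        return [
            IOR("red_block", "grasp", (0.4, 0.0, 0.1)),
            IOR("red_block", "place", (0.4, 0.2, 0.1)),
        ]

class GeneralistEmbodiedAgent:
    """Stands in for the RL-trained GEA: turns each IOR into
    low-level motor commands."""
    def execute(self, ior: IOR) -> bool:
        # A real GEA would roll out a learned policy; we just log the step.
        print(f"{ior.primitive} {ior.object_id} -> {ior.target_pose}")
        return True

def robridge_rollout(instruction: str) -> bool:
    """Full pipeline: plan symbolically, then execute each step."""
    hcp, gea = HighLevelCognitivePlanner(), GeneralistEmbodiedAgent()
    return all(gea.execute(step) for step in hcp.plan(instruction))

if __name__ == "__main__":
    robridge_rollout("move the red block to the right")
```

The key design point the sketch illustrates is the decoupling: the planner never emits motor commands and the agent never sees raw language, so either side can be swapped (a different VLM, a different embodiment) as long as both agree on the IOR schema.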