🤖 AI Summary
Pretrained robotic policies often lack semantic understanding, yet full architectural replacement is costly and risks eroding existing capabilities. To address this, we propose GUIDES, a lightweight semantic augmentation framework that preserves the original policy architecture. GUIDES employs a fine-tuned vision-language model to generate contextual instructions and encode them into embeddings, which are injected into the policy's latent space to modulate behavior. It further introduces an Instructor-Reflector dual-module mechanism: the Instructor delivers real-time semantic guidance, while the Reflector leverages a large language model to analyze execution history and dynamically refine instructions. Integrated with inference-time confidence monitoring and context-aware self-correction, GUIDES significantly enhances robustness. Experiments in RoboCasa simulations demonstrate substantial improvements in task success rates across diverse policy architectures; validation on a physical UR5 platform confirms enhanced motion precision for subtasks such as grasping.
📝 Abstract
Pre-trained robot policies, which encapsulate extensive embodied knowledge, serve as the foundation of many validated robotic systems. However, they often lack the semantic awareness characteristic of foundation models, and replacing them entirely is impractical in many situations due to high costs and the loss of accumulated knowledge. To address this gap, we introduce GUIDES, a lightweight framework that augments pre-trained policies with semantic guidance from foundation models without requiring architectural redesign. GUIDES employs a fine-tuned vision-language model (Instructor) to generate contextual instructions, which are encoded by an auxiliary module into guidance embeddings. These embeddings are injected into the policy's latent space, allowing the legacy model to adapt to this new semantic input through brief, targeted fine-tuning. For inference-time robustness, a large language model-based Reflector monitors the Instructor's confidence and, when confidence is low, initiates a reasoning loop that analyzes execution history, retrieves relevant examples, and augments the VLM's context to refine subsequent actions. Extensive validation in the RoboCasa simulation environment across diverse policy architectures shows consistent and substantial improvements in task success rates. Real-world deployment on a UR5 robot further demonstrates that GUIDES enhances motion precision for critical sub-tasks such as grasping. Overall, GUIDES offers a practical and resource-efficient pathway to upgrade, rather than replace, validated robot policies.
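To make the injection step concrete, the following is a minimal sketch of how a guidance embedding might be projected into a frozen policy's latent space as an additive residual. All names, dimensions, and the placeholder encoder are illustrative assumptions, not the paper's implementation; a real system would use the fine-tuned Instructor VLM and a learned adapter.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 64  # dimensionality of the policy's latent state (assumed)
GUIDE_DIM = 32   # dimensionality of the instruction embedding (assumed)

def encode_instruction(text: str) -> np.ndarray:
    """Hypothetical stand-in for the Instructor VLM's text encoder.

    Deterministically maps an instruction string to a fixed-size
    embedding; a real system would run a fine-tuned VLM here.
    """
    seed = abs(hash(text)) % (2**32)
    return np.random.default_rng(seed).standard_normal(GUIDE_DIM)

# Lightweight injection adapter: a linear projection from the guidance
# space into the latent space, applied as an additive residual so the
# legacy policy head still consumes a latent of the same shape.
W_adapter = rng.standard_normal((GUIDE_DIM, LATENT_DIM)) * 0.01

def inject_guidance(policy_latent: np.ndarray, instruction: str) -> np.ndarray:
    g = encode_instruction(instruction)          # (GUIDE_DIM,)
    return policy_latent + g @ W_adapter         # (LATENT_DIM,)

z = rng.standard_normal(LATENT_DIM)              # latent from the frozen backbone
z_guided = inject_guidance(z, "grasp the mug by its handle")
```

Because the adapter only shifts the existing latent, the pre-trained policy's architecture and weights stay untouched; only the adapter (and, per the abstract, a brief targeted fine-tune of the policy) must learn to use the new semantic signal.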