GUIDES: Guidance Using Instructor-Distilled Embeddings for Pre-trained Robot Policy Enhancement

📅 2025-11-05
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Pretrained robotic policies often lack semantic understanding, yet full architectural replacement is costly and risks eroding existing capabilities. To address this, we propose GUIDES, a lightweight semantic augmentation framework that preserves the original policy architecture. GUIDES employs a fine-tuned vision-language model to generate contextual instructions and encode them into embeddings, which are injected into the policy's latent space to modulate behavior. It further introduces a teacher-reflection dual-module mechanism: the teacher module delivers real-time semantic guidance, while the reflection module leverages a large language model to analyze execution history and dynamically refine instructions. Integrated with inference-time confidence monitoring and context-aware self-correction, GUIDES significantly enhances robustness. Experiments in RoboCasa simulations demonstrate substantial improvements in task success rates across diverse policy architectures; validation on a physical UR5 platform confirms enhanced motion precision for subtasks such as grasping.

📝 Abstract
Pre-trained robot policies serve as the foundation of many validated robotic systems, which encapsulate extensive embodied knowledge. However, they often lack the semantic awareness characteristic of foundation models, and replacing them entirely is impractical in many situations due to high costs and the loss of accumulated knowledge. To address this gap, we introduce GUIDES, a lightweight framework that augments pre-trained policies with semantic guidance from foundation models without requiring architectural redesign. GUIDES employs a fine-tuned vision-language model (Instructor) to generate contextual instructions, which are encoded by an auxiliary module into guidance embeddings. These embeddings are injected into the policy's latent space, allowing the legacy model to adapt to this new semantic input through brief, targeted fine-tuning. For inference-time robustness, a large language model-based Reflector monitors the Instructor's confidence and, when confidence is low, initiates a reasoning loop that analyzes execution history, retrieves relevant examples, and augments the VLM's context to refine subsequent actions. Extensive validation in the RoboCasa simulation environment across diverse policy architectures shows consistent and substantial improvements in task success rates. Real-world deployment on a UR5 robot further demonstrates that GUIDES enhances motion precision for critical sub-tasks such as grasping. Overall, GUIDES offers a practical and resource-efficient pathway to upgrade, rather than replace, validated robot policies.
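The abstract describes guidance embeddings being injected into the policy's latent space so the legacy model can adapt through brief fine-tuning. A minimal sketch of one plausible injection scheme is shown below; the dimensions, the additive injection, and the learned projection `W_proj` are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions; the paper does not specify them.
LATENT_DIM = 64   # policy's latent feature size
GUIDE_DIM = 32    # guidance-embedding size from the Instructor

# Hypothetical learned projection mapping guidance embeddings
# into the policy's latent space (here: random, for illustration).
W_proj = rng.standard_normal((GUIDE_DIM, LATENT_DIM)) * 0.01

def inject_guidance(policy_latent, guidance_embedding, w_proj=W_proj):
    """Additively inject a projected guidance embedding into the
    policy's latent vector, modulating behavior without changing
    the policy's architecture."""
    projected = guidance_embedding @ w_proj   # shape (LATENT_DIM,)
    return policy_latent + projected          # modulated latent

latent = rng.standard_normal(LATENT_DIM)      # stand-in policy latent
guide = rng.standard_normal(GUIDE_DIM)        # stand-in guidance embedding
modulated = inject_guidance(latent, guide)
print(modulated.shape)  # (64,)
```

Because the injection leaves the latent dimensionality unchanged, downstream policy layers need no architectural redesign; only a brief fine-tuning phase is needed so the policy learns to use the new signal.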
Problem

Research questions and friction points this paper is trying to address.

Enhances pre-trained robot policies with semantic awareness from foundation models
Enables legacy policies to adapt to semantic input without architectural redesign
Improves task success rates and motion precision through contextual guidance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Augments pre-trained policies with semantic guidance
Injects guidance embeddings into policy latent space
Uses LLM-based Reflector for inference-time robustness
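The Reflector's role, as the abstract describes it, is a confidence-gated loop: when the Instructor's confidence is low, the Reflector analyzes execution history, retrieves relevant examples, and augments the VLM's context before re-querying. The sketch below shows that control flow with toy stand-ins; the threshold, function names, and retrieval step are all assumptions for illustration.

```python
# Assumed confidence cutoff; the paper does not state a value.
CONF_THRESHOLD = 0.7

def reflect(history, example_bank):
    """Stand-in for the LLM-based Reflector: condenses recent
    execution history and retrieved examples into extra context."""
    return {"history": history[-3:], "examples": example_bank[:2]}

def guided_step(instructor, history, example_bank):
    """One inference step: query the Instructor, and if its
    confidence is low, reflect and re-query with augmented context."""
    instruction, confidence = instructor(context=None)
    if confidence < CONF_THRESHOLD:
        extra = reflect(history, example_bank)
        instruction, confidence = instructor(context=extra)
    return instruction, confidence

# Toy Instructor: confident only once it receives extra context.
def toy_instructor(context=None):
    if context is None:
        return "grasp mug", 0.4
    return "grasp mug handle", 0.9

inst, conf = guided_step(
    toy_instructor, history=["reach", "miss"], example_bank=["demo1"]
)
print(inst, conf)  # grasp mug handle 0.9
```

The key design point this illustrates is that reflection is invoked only on low-confidence steps, keeping the expensive LLM reasoning loop off the common path.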
Authors
Minquan Gao (University of California, Riverside)
Xinyi Li (Johns Hopkins University)
Qing Yan (Research Scientist, Bytedance Inc; generative models, diffusion models, computer vision)
Xiaojian Sun (Johns Hopkins University)
Xiaopan Zhang (University of California, Riverside)
Chien-Ming Huang (Johns Hopkins University; human-robot interaction, human-computer interaction, social robotics)
Jiachen Li (University of California, Riverside)