🤖 AI Summary
In complex out-of-distribution (OOD) environments—such as cluttered or occluded scenes—global visual representations are highly susceptible to interference, leading to degraded imitation learning performance. To address this, we propose GLUE, a Global-Local Unified Encoding framework. GLUE introduces a text-guided key image patch selection and tracking mechanism, coupled with a global-feature-driven local information fusion architecture, enabling task-relevant feature alignment and contextual consistency preservation. It further enhances representation robustness via vision transformers, multi-scale feature fusion, and construction of a low-heterogeneity representation space. Experiments demonstrate that GLUE outperforms the strongest baseline by 17.6% in simulation and by 36.3% in real-world settings; moreover, its real-environment generalization capability improves by 58.3%.
📝 Abstract
In recent years, visual representation learning has gained widespread attention in robotic imitation learning. However, in complex out-of-distribution (OOD) settings characterized by clutter and occlusion, the attention of global visual representations can be diluted or interfered with, leading to degraded policy performance. The invariance of local representations for task-relevant objects offers a solution: by efficiently utilizing these local representations, training and testing data can be mapped to a more similar feature space, thereby mitigating the covariate shift problem. Accordingly, we propose GLUE, a global-local unified encoding framework for imitation learning based on key-patch tracking. GLUE selects and tracks key patches as critical local representations by employing a text-guided mechanism. It features a novel fusion framework in which global patch features query local patches to distill essential information, yielding fine-grained local features with low heterogeneity relative to the global context. This fused representation steers the robot's visual attention toward task-relevant objects while preserving precise global context, which together align the training and testing distributions into a similar and task-informative feature space, ultimately enhancing the robustness of the imitation learning policy. Experiments demonstrate that GLUE achieves strong performance across diverse tasks in both simulation and real-world settings, outperforming the strongest baseline by 17.6% in simulation, 36.3% in real-world environments, and 58.3% in real-world generalization settings. The project website of GLUE is available at https://GLUE666.github.io/.
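The fusion step described in the abstract, where global patch features query local key-patch features to distill task-relevant information, can be pictured as a single cross-attention layer with a residual connection. The sketch below is an illustrative assumption, not the paper's actual implementation: the token counts, dimensions, and weight matrices (`Wq`, `Wk`, `Wv`) are hypothetical placeholders.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_global_local(global_feats, local_feats, Wq, Wk, Wv):
    """Cross-attention sketch: global patch tokens act as queries over
    tracked local key-patch tokens (assumed mechanism, not GLUE's exact one)."""
    Q = global_feats @ Wq                  # (N_global, d) queries
    K = local_feats @ Wk                   # (N_local, d) keys
    V = local_feats @ Wv                   # (N_local, d) values
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    attn = softmax(scores, axis=-1)        # (N_global, N_local)
    distilled = attn @ V                   # local info distilled per global token
    return global_feats + distilled        # residual keeps the global context

rng = np.random.default_rng(0)
d = 16
global_tokens = rng.standard_normal((196, d))  # e.g. ViT patch grid (assumed size)
local_tokens = rng.standard_normal((8, d))     # tracked key patches (assumed count)
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
fused = fuse_global_local(global_tokens, local_tokens, Wq, Wk, Wv)
```

The residual form means the output stays in the same space as the global tokens, which is one plausible way to realize the "low heterogeneity relative to the global context" property the abstract claims.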