🤖 AI Summary
This work addresses the challenge of catastrophic forgetting in continual imitation learning, where distillation based on L2 feature matching in high-dimensional spaces is prone to noise interference, which compromises the geometric structure of task representations. To mitigate this, the authors propose a geometry-preserving representation distillation framework that aligns policy representations within a low-rank subspace obtained via singular value decomposition, thereby preserving the intrinsic manifold structure across multimodal tasks. Additionally, a confidence-guided KL divergence loss is introduced to perform knowledge distillation selectively on high-reliability action samples. By integrating subspace alignment with confidence-aware distillation, the method substantially improves knowledge transfer, alleviates forgetting, and achieves state-of-the-art performance on the LIBERO benchmark.
📝 Abstract
A key challenge in lifelong imitation learning (LIL) is enabling agents to acquire new skills from expert demonstrations while retaining prior knowledge. This requires preserving the low-dimensional manifolds and geometric structures that underlie task representations across sequential learning. Existing distillation methods, which rely on L2-norm feature matching in the raw feature space, are sensitive to noise and high-dimensional variability, and often fail to preserve intrinsic task manifolds. To address this, we introduce SPREAD, a geometry-preserving framework that employs singular value decomposition (SVD) to align policy representations across tasks within low-rank subspaces. This alignment maintains the underlying geometry of multimodal features, facilitating stable transfer, robustness, and generalization. Additionally, we propose a confidence-guided distillation strategy that applies a Kullback-Leibler divergence loss restricted to the top-M most confident action samples, emphasizing reliable modes and improving optimization stability. Experiments on LIBERO, a lifelong imitation learning benchmark, show that SPREAD substantially improves knowledge transfer, mitigates catastrophic forgetting, and achieves state-of-the-art performance.
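The two losses described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it assumes features are stored as a `(samples, dim)` matrix, uses the teacher's top-`rank` right singular vectors as the shared subspace, and scores sample reliability by the teacher's maximum action probability. All function and variable names here are hypothetical.

```python
import numpy as np

def subspace_align_loss(feat_teacher, feat_student, rank):
    """Geometry-preserving distillation (sketch): instead of an L2 loss
    in the raw high-dimensional feature space, project both feature
    matrices onto the teacher's top-`rank` singular subspace and match
    them there, preserving the dominant geometric structure."""
    # Right singular vectors of the teacher features span the subspace.
    _, _, vt = np.linalg.svd(feat_teacher, full_matrices=False)
    basis = vt[:rank]                        # (rank, dim) subspace basis
    proj_t = feat_teacher @ basis.T          # teacher in the subspace
    proj_s = feat_student @ basis.T          # student in the subspace
    return np.mean((proj_t - proj_s) ** 2)   # align within the subspace

def confident_kl_loss(p_teacher, p_student, top_m):
    """Confidence-guided distillation (sketch): keep only the `top_m`
    samples where the teacher's action distribution is most confident,
    then average KL(teacher || student) over those samples."""
    eps = 1e-8
    confidence = p_teacher.max(axis=1)        # per-sample peak probability
    idx = np.argsort(-confidence)[:top_m]     # top-M most reliable samples
    pt, ps = p_teacher[idx], p_student[idx]
    kl = np.sum(pt * (np.log(pt + eps) - np.log(ps + eps)), axis=1)
    return kl.mean()
```

In this reading, the SVD step fixes a low-rank basis from the teacher so that both policies are compared on the same intrinsic manifold, while the top-M filter discards low-confidence (potentially noisy) action modes before computing the KL term.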