🤖 AI Summary
This work addresses catastrophic forgetting in imitation learning under limited memory and data settings for continuous task learning by proposing a lifelong imitation learning framework. The approach compactly stores and replays multimodal representations of visual observations, language instructions, and robot states in a shared latent space, while introducing an angular margin-based incremental feature adaptation mechanism to preserve discriminability across tasks. Evaluated on the LIBERO benchmark, the method achieves a new state of the art, improving the area under the curve (AUC) by 10-17 percentage points and reducing forgetting rates by up to 65%. Ablation studies confirm the effectiveness of each proposed component.
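To make the replay idea concrete, here is a minimal sketch of a compact latent replay buffer. Instead of raw images and instructions, fused multimodal latent vectors are stored per task and resampled during later training. All names (`LatentReplayBuffer`, `capacity_per_task`, etc.) are illustrative assumptions, not the paper's actual API.

```python
import numpy as np

class LatentReplayBuffer:
    """Hypothetical sketch: store compact per-task latent vectors and
    replay a mixed batch of them when training on new tasks."""

    def __init__(self, capacity_per_task):
        self.capacity = capacity_per_task
        self.store = {}  # task_id -> (n, d) array of latent vectors

    def add(self, task_id, latents):
        # Append new latents for this task, keeping only the newest
        # `capacity` entries to respect the memory budget.
        buf = self.store.setdefault(task_id, np.empty((0, latents.shape[1])))
        self.store[task_id] = np.concatenate([buf, latents])[-self.capacity:]

    def sample(self, batch_size, rng=None):
        # Draw a replay batch uniformly across all stored tasks.
        if rng is None:
            rng = np.random.default_rng(0)
        all_latents = np.concatenate(list(self.store.values()))
        n = min(batch_size, len(all_latents))
        idx = rng.choice(len(all_latents), size=n, replace=False)
        return all_latents[idx]
```

Because only low-dimensional latents are stored rather than raw observations, the buffer stays within the limited-memory setting the paper targets.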
📝 Abstract
We introduce a lifelong imitation learning framework that enables continual policy refinement across sequential tasks under realistic memory and data constraints. Our approach departs from conventional experience replay by operating entirely in a multimodal latent space, where compact representations of visual, linguistic, and robot state information are stored and reused to support future learning. To further stabilize adaptation, we introduce an incremental feature adjustment mechanism that regularizes the evolution of task embeddings through an angular margin constraint, preserving inter-task distinctiveness. Our method establishes a new state of the art on the LIBERO benchmark, achieving 10-17-point gains in AUC and up to 65% lower forgetting compared to previous leading methods. Ablation studies confirm the effectiveness of each component, showing consistent gains over alternative strategies. The code is available at: https://github.com/yfqi/lifelong_mlr_ifa.
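The angular margin constraint can be illustrated with a small sketch: new task embeddings are penalized whenever their angle to any previously stored task prototype falls below a margin, which pushes tasks apart on the unit sphere. The function name, margin value, and hinge form below are assumptions for illustration; the paper's exact loss may differ.

```python
import numpy as np

def angular_margin_penalty(z_new, prototypes, margin_deg=30.0):
    """Hypothetical sketch of an angular margin regularizer.

    z_new:      (d,) embedding of the current task.
    prototypes: (k, d) stored embeddings of earlier tasks.
    Returns a hinge penalty that is positive only when z_new is within
    `margin_deg` degrees of some earlier-task prototype.
    """
    z = z_new / np.linalg.norm(z_new)
    P = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    cos_sim = P @ z                          # cosine similarity to each prototype
    cos_margin = np.cos(np.deg2rad(margin_deg))
    # Hinge: penalize similarities above the margin threshold.
    return float(np.sum(np.maximum(0.0, cos_sim - cos_margin)))
```

Adding this term to the imitation loss discourages new task embeddings from collapsing onto old ones, which is one plausible way to preserve the inter-task distinctiveness the abstract describes.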