Incremental Learning of Retrievable Skills For Efficient Continual Task Adaptation

📅 2024-10-30
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address key challenges in continual imitation learning (CiL)—poor cross-task skill transfer, catastrophic forgetting of previously learned tasks, and low adaptation efficiency under non-stationary dynamics and scarce demonstration data—this paper proposes IsCiL, a prototype-memory-driven framework for incrementally learning retrievable skills. The method maps demonstrations into a state embedding space and builds a library of skill prototypes shared across tasks, enabling retrieval-based skill reuse and incremental updates to the corresponding adapters. This overcomes the knowledge-isolation limitation of conventional adapter-based approaches and also supports a simple extension for controllable task unlearning. Evaluated on the Franka-Kitchen and Meta-World benchmarks, the framework significantly improves adaptation speed and sample efficiency on new tasks. The results indicate that cross-task knowledge transfer and selective forgetting can be combined to yield both stronger generalization and adaptive plasticity in dynamic CiL settings.

📝 Abstract
Continual Imitation Learning (CiL) involves extracting and accumulating task knowledge from demonstrations across multiple stages and tasks to achieve a multi-task policy. With recent advancements in foundation models, there has been a growing interest in adapter-based CiL approaches, where adapters are established parameter-efficiently for tasks newly demonstrated. While these approaches isolate parameters for specific tasks and tend to mitigate catastrophic forgetting, they limit knowledge sharing among different demonstrations. We introduce IsCiL, an adapter-based CiL framework that addresses this limitation of knowledge sharing by incrementally learning shareable skills from different demonstrations, thus enabling sample-efficient task adaptation using the skills particularly in non-stationary CiL environments. In IsCiL, demonstrations are mapped into the state embedding space, where proper skills can be retrieved upon input states through prototype-based memory. These retrievable skills are incrementally learned on their corresponding adapters. Our CiL experiments with complex tasks in Franka-Kitchen and Meta-World demonstrate robust performance of IsCiL in both task adaptation and sample-efficiency. We also show a simple extension of IsCiL for task unlearning scenarios.
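The abstract's core mechanism—mapping states into an embedding space, retrieving the right skill through prototype-based memory, and routing to a per-skill adapter—can be sketched minimally as follows. This is an illustrative toy, not the paper's implementation: the class name, the centroid prototypes, and the nearest-neighbor lookup are all assumptions for exposition.

```python
import numpy as np

class PrototypeSkillMemory:
    """Toy prototype-based memory: maps state embeddings to skill/adapter ids.

    Assumes states are already embedded (e.g., by some encoder); each skill
    prototype here is simply the centroid of its demonstration embeddings.
    """

    def __init__(self):
        self.prototypes = []  # one centroid per skill
        self.skill_ids = []   # parallel list of skill/adapter identifiers

    def add_skill(self, demo_embeddings, skill_id):
        # Incrementally register a new skill: its prototype is the centroid
        # of the demonstration embeddings assigned to it.
        self.prototypes.append(np.mean(demo_embeddings, axis=0))
        self.skill_ids.append(skill_id)

    def retrieve(self, state_embedding):
        # Nearest-prototype lookup (L2 distance) selects which skill
        # adapter should handle the current input state.
        dists = [np.linalg.norm(state_embedding - p) for p in self.prototypes]
        return self.skill_ids[int(np.argmin(dists))]

    def unlearn(self, skill_id):
        # "Task unlearning" in this sketch is just dropping the prototype,
        # so the corresponding adapter can never be retrieved again.
        keep = [i for i, s in enumerate(self.skill_ids) if s != skill_id]
        self.prototypes = [self.prototypes[i] for i in keep]
        self.skill_ids = [self.skill_ids[i] for i in keep]
```

Because retrieval is decoupled from the adapters themselves, removing a prototype suffices to disable a skill, which is one simple way to realize the unlearning extension the abstract mentions.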
Problem

Research questions and friction points this paper is trying to address.

Continual Imitation Learning
Skill Transfer
Adaptation in Dynamic Environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

IsCiL
Adapter-based Continual Learning
Flexible Forgetting