Efficient Continual Adaptation of Pretrained Robotic Policy with Online Meta-Learned Adapters

πŸ“… 2025-03-24
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address the limited knowledge transfer of pretrained robotic policies during continual adaptation to novel tasks in dynamic home environments, this paper proposes the Online Meta-Learning Adapter (OMLA). OMLA is the first approach to embed meta-learning objectives directly into the online gradient updates of a lightweight, parameter-efficient fine-tuning (PEFT) adapter, enabling implicit cross-task knowledge reuse. Its plug-and-play architecture requires neither task identifiers nor historical data replay, supporting single-pass online adaptation in real-world settings. Experiments on both simulated and physical robot platforms demonstrate that OMLA achieves an average 23.6% improvement in task adaptation success rate over state-of-the-art baselines, while significantly accelerating convergence and enhancing final performance. This work establishes an efficient, scalable paradigm for continual autonomous learning in domestic service robotics.

πŸ“ Abstract
Continual adaptation is essential for general autonomous agents. For example, a household robot pretrained with a repertoire of skills must still adapt to unseen tasks specific to each household. Motivated by this, and building upon parameter-efficient fine-tuning in language models, prior works have explored lightweight adapters for adapting pretrained policies, which preserve features learned during pretraining and demonstrate good adaptation performance. However, these approaches treat each task in isolation, limiting knowledge transfer between tasks. In this paper, we propose Online Meta-Learned Adapters (OMLA). Instead of applying adapters directly, OMLA facilitates knowledge transfer from previously learned tasks to the task currently being learned through a novel meta-learning objective. Extensive experiments in both simulated and real-world environments demonstrate that OMLA achieves better adaptation performance than the baseline methods. Project page: https://ricky-zhu.github.io/OMLA/.
Problem

Research questions and friction points this paper is trying to address.

Adapt pretrained robotic policies to unseen tasks
Enable knowledge transfer between different tasks
Improve adaptation performance with meta-learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Online Meta-Learned Adapters for continual adaptation
Parameter-efficient fine-tuning with lightweight adapters
Meta-learning objective for knowledge transfer between tasks
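The combination above — a lightweight adapter on a frozen pretrained policy, updated with a meta-learning objective so that experience on past tasks speeds up adaptation to new ones — can be sketched in a minimal first-order (FOMAML-style) form. The paper's exact objective, architecture, and hyperparameters are not reproduced here; the linear "policy", the residual adapter, and all function names below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen pretrained "policy": a linear map standing in for a large backbone.
W_base = rng.normal(size=(2, 3))

def mse_grad(adapter, X, Y):
    # Loss and gradient w.r.t. the adapter ONLY; the base stays frozen (PEFT-style).
    err = (W_base + adapter) @ X - Y        # adapter adds a lightweight residual
    loss = np.mean(err ** 2)
    grad = 2.0 * err @ X.T / X.shape[1]
    return loss, grad

def fomaml_step(adapter, support, query, inner_lr=0.05, outer_lr=0.05):
    """One first-order meta-update: adapt on support data, score on query data."""
    _, g_in = mse_grad(adapter, *support)
    adapted = adapter - inner_lr * g_in          # inner loop: task adaptation
    q_loss, g_out = mse_grad(adapted, *query)    # outer loop: meta objective
    return adapter - outer_lr * g_out, q_loss    # first-order outer update

# Toy online stream of tasks: each task perturbs the base policy slightly.
adapter = rng.normal(size=W_base.shape)          # deliberately bad initialization
losses = []
for _ in range(200):
    W_task = W_base + rng.normal(scale=0.1, size=W_base.shape)
    X_s, X_q = rng.normal(size=(3, 8)), rng.normal(size=(3, 8))
    adapter, q_loss = fomaml_step(adapter, (X_s, W_task @ X_s), (X_q, W_task @ X_q))
    losses.append(q_loss)

print(f"first query loss {losses[0]:.4f}, last query loss {losses[-1]:.4f}")
```

The point of the sketch is the structure, not the numbers: post-adaptation (query) loss is what the outer update minimizes, so knowledge accumulated across the task stream is stored in the adapter initialization itself, without task identifiers or replay of past data.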
πŸ”Ž Similar Papers
No similar papers found.