🤖 AI Summary
This work addresses catastrophic forgetting in continual reinforcement learning by proposing a dual-module architecture inspired by hippocampal–cortical interaction in the human brain. The framework comprises a fast learner that facilitates knowledge transfer and a meta-learner that incrementally integrates new experiences. A key innovation is a meta-learning objective that explicitly minimizes forgetting, complemented by an adaptive meta warm-up strategy that enhances knowledge accumulation and reuse. Experimental evaluations on pixel-based and continuous control benchmarks demonstrate that the proposed method significantly outperforms existing baselines, exhibiting superior continual learning capability and environmental adaptability.
📝 Abstract
Inspired by the human learning and memory system, particularly the interplay between the hippocampus and cerebral cortex, this study proposes a dual-learner framework comprising a fast learner and a meta learner to address continual Reinforcement Learning (RL) problems. The two learners are coupled to perform distinct yet complementary roles: the fast learner focuses on knowledge transfer, while the meta learner ensures knowledge integration. In contrast to traditional multi-task RL approaches that share knowledge by maximizing average return, our meta learner incrementally integrates new experiences by explicitly minimizing catastrophic forgetting, thereby supporting efficient cumulative knowledge transfer to the fast learner. To facilitate rapid adaptation in new environments, we introduce an adaptive meta warm-up mechanism that selectively harnesses past knowledge. Experiments on various pixel-based and continuous control benchmarks demonstrate that our dual-learner approach achieves superior continual learning performance relative to baseline methods. The code is released at https://github.com/datake/FAME.
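The abstract's dual-learner loop (fast adaptation on the current task, meta consolidation that penalizes forgetting, and a warm-up from meta knowledge) can be illustrated with a toy sketch. Everything here is an assumption for illustration only: the quadratic "task losses", the function names (`fast_learn`, `meta_consolidate`), and the learning rates are not from the paper, which operates on RL objectives rather than closed-form losses.

```python
import numpy as np

def task_loss_grad(theta, target):
    # Quadratic stand-in for an RL objective: L = 0.5 * ||theta - target||^2
    return theta - target

def fast_learn(theta_init, target, lr=0.5, steps=20):
    # Fast learner: rapid adaptation to the current task,
    # warm-started from the meta learner's parameters.
    theta = theta_init.copy()
    for _ in range(steps):
        theta -= lr * task_loss_grad(theta, target)
    return theta

def meta_consolidate(meta_theta, fast_theta, old_targets, lr=0.1, steps=50):
    # Meta learner: move toward the fast learner's new solution while
    # explicitly penalizing forgetting (loss increase) on past tasks.
    theta = meta_theta.copy()
    for _ in range(steps):
        grad = theta - fast_theta        # stay close to new knowledge
        for t in old_targets:            # forgetting penalty on old tasks
            grad += task_loss_grad(theta, t)
        theta -= lr * grad
    return theta

meta_theta = np.zeros(2)
seen = []
for target in [np.array([1.0, 0.0]), np.array([0.0, 1.0])]:
    # Warm-up: the fast learner is initialized from accumulated meta knowledge.
    fast_theta = fast_learn(meta_theta, target)
    meta_theta = meta_consolidate(meta_theta, fast_theta, seen)
    seen.append(target)

print(np.round(meta_theta, 2))
```

After two tasks, the meta parameters settle near the compromise that keeps both task losses low, rather than drifting entirely to the most recent task, which is the qualitative behavior the forgetting-minimizing objective is meant to produce.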