Hierarchical Subspaces of Policies for Continual Offline Reinforcement Learning

📅 2024-12-19
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses catastrophic forgetting and scalability challenges in continual offline reinforcement learning, specifically for navigation tasks under evolving topological and kinodynamic constraints. We propose HiSPO, a hierarchical framework that—uniquely—models policy parameters as reusable and extensible neural subspaces. HiSPO achieves continual knowledge accumulation and reuse through three core mechanisms: (1) policy decomposition into task-specific subspaces, (2) orthogonal subspace constraints to prevent interference, and (3) a hierarchical task routing mechanism enabling adaptive architecture growth without modifying existing components. Evaluated on MuJoCo maze navigation and video-game-scale navigation simulations, HiSPO reduces memory footprint by 37%, improves average task accuracy by 21%, and decreases forgetting rate by 58% relative to state-of-the-art baselines. These results demonstrate substantial gains in both stability and scalability of continual learning for embodied navigation.

Technology Category

Application Category

📝 Abstract
We consider a Continual Reinforcement Learning setup, where a learning agent must continuously adapt to new tasks while retaining previously acquired skill sets, with a focus on the challenge of avoiding forgetting past gathered knowledge and ensuring scalability with the growing number of tasks. Such issues prevail in autonomous robotics and video game simulations, notably for navigation tasks prone to topological or kinematic changes. To address these issues, we introduce HiSPO, a novel hierarchical framework designed specifically for continual learning in navigation settings from offline data. Our method leverages distinct policy subspaces of neural networks to enable flexible and efficient adaptation to new tasks while preserving existing knowledge. We demonstrate, through a careful experimental study, the effectiveness of our method in both classical MuJoCo maze environments and complex video game-like navigation simulations, showcasing competitive performances and satisfying adaptability with respect to classical continual learning metrics, in particular regarding the memory usage and efficiency.
Problem

Research questions and friction points this paper is trying to address.

Avoid forgetting past knowledge in continual reinforcement learning
Ensure scalability with increasing number of tasks
Adapt to new navigation tasks from offline data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical policy subspaces for continual learning
Offline data adaptation with neural networks
Efficient memory usage in navigation tasks
🔎 Similar Papers