🤖 AI Summary
Current continual reinforcement learning (CRL) evaluation tunes hyperparameters over the agent's entire lifetime (e.g., 200M frames), contradicting CRL's core objective of open-ended adaptation and yielding evaluations that discriminate poorly between algorithms and transfer poorly to real deployments.
Method: We identify this flaw and propose a *k%-budget evaluation* standard: hyperparameter optimization is restricted to only *k%* (e.g., 1–10%) of the total task-sequence data, emulating real-world resource constraints. This yields an empirical evaluation framework for CRL under a strict data budget.
Contribution/Results: We empirically demonstrate that mechanisms for preserving network plasticity (including elastic weight consolidation, dynamic network expansion, and gradient regularization) are decisive for robustness under ultra-sparse tuning. Built atop DQN and SAC, plasticity-enhanced variants significantly outperform baselines under extreme data scarcity, providing practical benchmarks for rethinking CRL evaluation and algorithm design.
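The k%-budget protocol described above can be sketched in a few lines. This is an illustrative sketch only: the function names (`evaluate`, `k_percent_budget`), the split-the-budget-evenly tuning strategy, and the `run_fn` interface are assumptions for exposition, not the paper's actual experimental harness.

```python
# Hypothetical sketch of a k%-budget evaluation protocol.
# All names and the even-split tuning strategy are illustrative assumptions.

def k_percent_budget(total_steps, k):
    """Environment steps available for hyperparameter tuning under a k% budget."""
    return int(total_steps * k / 100)

def evaluate(agent_factory, hyperparam_grid, total_steps, k, run_fn):
    """Tune on only k% of the lifetime, then deploy the chosen setting.

    run_fn(agent, steps) -> cumulative reward over `steps` environment steps.
    """
    tune_steps = k_percent_budget(total_steps, k)
    # Phase 1: spend the small tuning budget comparing configurations.
    per_config = max(1, tune_steps // len(hyperparam_grid))
    scores = {hp: run_fn(agent_factory(hp), per_config) for hp in hyperparam_grid}
    best = max(scores, key=scores.get)
    # Phase 2: deploy the selected configuration for the full lifetime,
    # with no further hyperparameter changes allowed.
    return best, run_fn(agent_factory(best), total_steps)
```

Standard practice corresponds to k = 100 (tuning over the whole lifetime); the proposed evaluation shrinks k to the 1–10% range, so a configuration chosen from a short prefix of experience must hold up over the full, possibly non-stationary, task sequence.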
📝 Abstract
In continual or lifelong reinforcement learning, access to the environment should be limited. If we aspire to design algorithms that can run for long periods, continually adapting to new, unexpected situations, then we must be willing to deploy our agents without tuning their hyperparameters over the agent's entire lifetime. The standard practice in deep RL, and even continual RL, is to assume unfettered access to the deployment environment for the full lifetime of the agent. In this paper, we propose a new approach for evaluating lifelong RL agents where only k percent of the experiment data can be used for hyperparameter tuning. We then conduct an empirical study of DQN and SAC across a variety of continuing and non-stationary domains. We find agents generally perform poorly when restricted to k-percent tuning, whereas several algorithmic mitigations designed to maintain network plasticity perform surprisingly well.