🤖 AI Summary
This paper addresses the dual challenge in lifelong reinforcement learning (RL) of rapidly adapting to new tasks while robustly retaining knowledge from previously encountered ones. Methodologically, it is among the first to systematically integrate PAC-Bayes theory into lifelong RL, introducing an evolvable shared policy distribution, termed the "world policy," and optimizing it empirically over the task stream. The analysis establishes a quantitative relationship between generalization performance and the number of prior tasks retained in memory, and derives a sample-complexity bound for the algorithm in terms of RL regret. Key contributions include: (1) a rigorous embedding of the PAC-Bayes framework into lifelong RL, yielding provable generalization guarantees; (2) a theoretical characterization of the trade-off between memory capacity and generalization; and (3) strong empirical performance across diverse dynamic task environments, demonstrating both theoretical rigor and practical effectiveness.
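For orientation, the PAC-Bayes machinery the summary refers to is, in its classical (McAllester-style) form, a bound of the following shape; this is the standard generic bound, not the paper's specific theorem, and in the lifelong-RL setting described above the sample count $m$ would roughly play the role of the number of tasks retained in memory:

$$
\Pr\!\left( \mathbb{E}_{\theta \sim Q}\!\left[L(\theta)\right] \;\le\; \mathbb{E}_{\theta \sim Q}\!\left[\hat{L}(\theta)\right] + \sqrt{\frac{\mathrm{KL}\!\left(Q \,\|\, P\right) + \ln \frac{2\sqrt{m}}{\delta}}{2m}} \right) \;\ge\; 1 - \delta,
$$

where $P$ is a prior over policies fixed before seeing data, $Q$ is any posterior (here, a learned world-policy distribution), $\hat{L}$ is empirical loss, and $L$ is its expectation. The bound tightens as $m$ grows, which is the intuition behind relating generalization to the number of retained tasks.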
📝 Abstract
Lifelong reinforcement learning (RL) has been developed as a paradigm for extending single-task RL to more realistic, dynamic settings. In lifelong RL, the "life" of an RL agent is modeled as a stream of tasks drawn from a task distribution. We propose EPIC (Empirical PAC-Bayes that Improves Continuously), a novel algorithm designed for lifelong RL using PAC-Bayes theory. EPIC learns a shared policy distribution, referred to as the world policy, which enables rapid adaptation to new tasks while retaining valuable knowledge from previous experiences. Our theoretical analysis establishes a relationship between the algorithm's generalization performance and the number of prior tasks preserved in memory. We also derive the sample complexity of EPIC in terms of RL regret. Extensive experiments on a variety of environments demonstrate that EPIC significantly outperforms existing methods in lifelong RL, offering both theoretical guarantees and practical efficacy through the use of the world policy.
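To make the "world policy" idea concrete, here is a minimal toy sketch (not the authors' implementation, and without the PAC-Bayes bookkeeping): the shared distribution is modeled as a Gaussian over policy parameters; for each incoming task the agent samples an initialization from it, adapts with a few gradient steps on a stand-in quadratic task objective, and then slowly moves the shared mean toward the adapted solution. All names and the update rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4

# Shared "world policy": a Gaussian over policy parameters (illustrative).
world_mean = np.zeros(dim)
world_std = np.ones(dim)  # kept fixed in this toy sketch

def adapt(init, task_optimum, steps=20, lr=0.3):
    """Per-task adaptation: gradient descent on the quadratic surrogate
    0.5 * ||theta - task_optimum||^2 standing in for the task's RL objective."""
    theta = init.copy()
    for _ in range(steps):
        theta -= lr * (theta - task_optimum)
    return theta

# A stream of related tasks: optima clustered around 1.0 in each dimension.
task_optima = [rng.normal(loc=1.0, scale=0.1, size=dim) for _ in range(10)]

for opt in task_optima:
    # Sample a task-specific initialization from the world policy.
    init = world_mean + world_std * rng.standard_normal(dim)
    theta = adapt(init, opt)
    # Slow update of the shared distribution: retain old knowledge while
    # incorporating the newly adapted policy.
    world_mean = 0.9 * world_mean + 0.1 * theta

print(np.round(world_mean, 2))  # drifts from 0 toward the task family near 1.0
```

The slow mean update is the knowledge-retention mechanism in this sketch: each task nudges the shared distribution without overwriting it, so later tasks start closer to the family of solutions seen so far.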