π€ AI Summary
Deep reinforcement learning (DRL) faces a critical bottleneck in physical system deployment: the absence of verifiable, tight quantification of policy risk. This paper introduces the first offline risk certification framework for DRL grounded in recursive PAC-Bayes theory, enabling efficient computation of high-confidence, tight generalization risk upper bounds using only a small number of rollout trajectories from pretrained policies. Our core methodological innovations include (i) the first application of recursive PAC-Bayes to DRL policy verification, (ii) a data-driven prior construction scheme, and (iii) an Actor-Criticβaware adaptation mechanism that ensures compatibility with standard policy-gradient architectures. Empirical evaluation on multi-task robotic control benchmarks demonstrates that our certified risk bounds are over 40% tighter than those produced by conventional methods, markedly improving deployment safety and practical verifiability.
π Abstract
After a period of research, deep actor-critic algorithms have reached a level where they influence our everyday lives. They serve as the driving force behind the continual improvement of large language models through user-collected feedback. However, their deployment in physical systems is not yet widely adopted, mainly because no validation scheme that quantifies their risk of malfunction. We demonstrate that it is possible to develop tight risk certificates for deep actor-critic algorithms that predict generalization performance from validation-time observations. Our key insight centers on the effectiveness of minimal evaluation data. Surprisingly, a small feasible of evaluation roll-outs collected from a pretrained policy suffices to produce accurate risk certificates when combined with a simple adaptation of PAC-Bayes theory. Specifically, we adopt a recently introduced recursive PAC-Bayes approach, which splits validation data into portions and recursively builds PAC-Bayes bounds on the excess loss of each portion's predictor, using the predictor from the previous portion as a data-informed prior. Our empirical results across multiple locomotion tasks and policy expertise levels demonstrate risk certificates that are tight enough to be considered for practical use.