🤖 AI Summary
This paper investigates zero-shot generalization in offline reinforcement learning: learning a policy from multi-environment offline datasets that performs well in unseen test environments without any online interaction. We establish the first formal theoretical framework for this problem and propose two algorithms—Pessimistic Empirical Risk Minimization (PERM) and Pessimistic Proximal Policy Optimization (PPPO). PERM integrates conservative policy evaluation with empirical risk minimization, while PPPO incorporates pessimistic constraints directly into the proximal policy optimization pipeline. We provide rigorous theoretical guarantees showing that both algorithms converge to near-optimal policies across environments, ensuring provable zero-shot generalization. This work delivers the first theoretically grounded foundation for generalization in offline RL, overcoming key limitations of prior approaches, which either rely on environment-specific assumptions or require online fine-tuning.
📝 Abstract
In this work, we study offline reinforcement learning (RL) with the zero-shot generalization (ZSG) property, where the agent has access to an offline dataset comprising experiences from different environments, and the goal is to train a policy over the training environments that performs well on test environments without further interaction. Existing work has shown that classical offline RL fails to generalize to new, unseen environments. We propose pessimistic empirical risk minimization (PERM) and pessimistic proximal policy optimization (PPPO), which leverage pessimistic policy evaluation to guide policy learning and enhance generalization. We show that both PERM and PPPO are capable of finding a near-optimal policy with ZSG. Our results serve as a first step toward understanding the foundations of the generalization phenomenon in offline reinforcement learning.
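The pessimism principle that both PERM and PPPO rely on can be illustrated with a toy tabular sketch (this is our own illustration, not the paper's algorithms: the function name, the 1/√n penalty form, and all parameter values are assumptions). The idea is to penalize value estimates for state-action pairs that are poorly covered by the offline data, so the learned policy avoids actions whose apparent value rests on little evidence:

```python
import numpy as np

def pessimistic_q(dataset, n_states, n_actions, gamma=0.9, beta=1.0, iters=200):
    """Toy tabular pessimistic policy evaluation (illustrative sketch).

    dataset: list of (s, a, r, s_next) transitions from offline data.
    Each Q-estimate is penalized by an uncertainty bonus beta / sqrt(count),
    so rarely-visited state-action pairs look worse; unseen pairs get a
    fixed pessimistic floor of -beta.
    """
    counts = np.zeros((n_states, n_actions))
    r_sum = np.zeros((n_states, n_actions))
    next_states = {}  # (s, a) -> list of observed next states
    for s, a, r, s2 in dataset:
        counts[s, a] += 1
        r_sum[s, a] += r
        next_states.setdefault((s, a), []).append(s2)

    q = np.zeros((n_states, n_actions))
    for _ in range(iters):
        v = q.max(axis=1)  # greedy value of each state under current Q
        for s in range(n_states):
            for a in range(n_actions):
                if counts[s, a] == 0:
                    q[s, a] = -beta  # no data: pessimistic floor (assumption)
                    continue
                r_hat = r_sum[s, a] / counts[s, a]
                ev = np.mean([v[s2] for s2 in next_states[(s, a)]])
                penalty = beta / np.sqrt(counts[s, a])
                q[s, a] = r_hat + gamma * ev - penalty
    return q

# A well-covered action (100 samples, reward 1.0) beats a barely-covered
# action (1 sample, reward 1.5) once the uncertainty penalty is applied.
data = [(0, 0, 1.0, 0)] * 100 + [(0, 1, 1.5, 0)]
q = pessimistic_q(data, n_states=1, n_actions=2)
```

Here the single-sample action has a higher empirical reward, yet its penalty of beta/√1 = 1.0 outweighs that advantage, so the pessimistic policy prefers the well-supported action, which is the qualitative behavior the paper's guarantees formalize.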