🤖 AI Summary
Reinforcement learning agents often generalize poorly to unseen environments because they overfit to their training conditions. To address this, we propose the first method that predicts an agent's generalization performance directly from its neural network weights and integrates this prediction into the PPO objective for generalization-aware policy optimization. Our key contributions are: (1) a differentiable weight-feature extraction module that maps model parameters to a scalar generalization score; and (2) a generalization-aware regularization term incorporated into the PPO loss, which explicitly encourages learning of robust, environment-invariant representations. Experiments across diverse generalization benchmarks—including visual-observation domains (ProcGen) and dynamics-shift settings (MultiRoom)—demonstrate substantial improvements in cross-environment performance: our method achieves an average generalization score 23.6% higher than standard PPO, without requiring environmental augmentation, domain randomization, or auxiliary supervision.
📝 Abstract
Reinforcement Learning (RL) agents tend to overfit to their training environments, making generalizability—the ability to perform well in environments different from those seen during training—a key open problem. To address this problem, we introduce a new methodology for predicting the generalizability score of an RL agent from the internal weights of its neural networks. Building on this prediction capability, we modify the Proximal Policy Optimization (PPO) loss function to explicitly boost the generalization score of agents trained with the resulting algorithm. Experimental results demonstrate that our modified PPO yields agents with stronger generalizability than the original version.
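To make the two contributions concrete, a minimal sketch of the idea is given below. The feature set (per-layer norms and moments), the linear probe `probe_w`, and the coefficient names (`lam_gen`, `c_v`, `c_ent`) are illustrative assumptions, not the paper's actual extractor or hyperparameters; the real module is described as differentiable and learned, whereas this sketch uses fixed statistics for clarity.

```python
import numpy as np


def weight_features(params):
    """Map a list of weight arrays to simple scalar statistics.

    Hypothetical feature set: Frobenius norm, mean, and std per layer.
    The paper's actual extraction module is learned and differentiable.
    """
    feats = []
    for w in params:
        w = np.asarray(w, dtype=float)
        feats.extend([np.linalg.norm(w), w.mean(), w.std()])
    return np.array(feats)


def generalization_score(params, probe_w, probe_b=0.0):
    """Linear probe over weight features, squashed to (0, 1) via sigmoid."""
    z = weight_features(params) @ probe_w + probe_b
    return 1.0 / (1.0 + np.exp(-z))


def ppo_loss_with_gen_reg(clip_loss, value_loss, entropy, params, probe_w,
                          c_v=0.5, c_ent=0.01, lam_gen=0.1):
    """Standard PPO loss minus a generalization bonus (assumed form).

    Subtracting lam_gen * g(theta) rewards parameter configurations the
    probe predicts will generalize well, analogous to the entropy bonus.
    """
    g = generalization_score(params, probe_w)
    return clip_loss + c_v * value_loss - c_ent * entropy - lam_gen * g
```

Because the score enters the loss as an additive term weighted by `lam_gen`, the gradient of the regularizer flows through the (in the paper, differentiable) feature extractor back into the policy weights themselves, steering training toward regions of parameter space associated with cross-environment robustness.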