🤖 AI Summary
This paper addresses goal misgeneralization in reinforcement learning—where policies optimize a proxy objective during training but deviate from the designer's intended objective in novel environments. To mitigate this, we study the minimax expected regret (MMER) criterion, which models environmental uncertainty from a robust decision-making perspective and, in theory, rules out goal misgeneralization—unlike the conventional maximum expected value (MEV) objective, under which it remains possible. We systematically evaluate domain randomization and regret-based unsupervised environment design (UED) under both the MEV and MMER frameworks, combining theoretical analysis with experiments on procedurally generated grid worlds. Results demonstrate that standard MEV-based training suffers from significant goal misgeneralization, whereas regret-based UED substantially improves cross-environment consistency with the intended objective, though it does not recover MMER policies in all cases. These findings support MMER as a promising criterion for robust goal alignment.
📝 Abstract
Safe generalization in reinforcement learning requires not only that a learned policy acts capably in new situations, but also that it uses its capabilities towards the pursuit of the designer's intended goal. The latter requirement may fail when a proxy goal incentivizes similar behavior to the intended goal within the training environment, but not in novel deployment environments. This creates the risk that policies will behave as if in pursuit of the proxy goal, rather than the intended goal, in deployment -- a phenomenon known as goal misgeneralization. In this paper, we formalize this problem setting in order to theoretically study the possibility of goal misgeneralization under different training objectives. We show that goal misgeneralization is possible under approximate optimization of the maximum expected value (MEV) objective, but not the minimax expected regret (MMER) objective. We then empirically show that the standard MEV-based training method of domain randomization exhibits goal misgeneralization in procedurally generated grid-world environments, whereas current regret-based unsupervised environment design (UED) methods are more robust to goal misgeneralization (though they do not find MMER policies in all cases). Our findings suggest that minimax expected regret is a promising approach to mitigating goal misgeneralization.
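The contrast between the MEV and MMER objectives can be sketched with a toy policy-selection example. The numbers below are hypothetical and not from the paper: a "proxy" policy scores highly in environments where the proxy and intended goals coincide but fails elsewhere, while an "intended" policy performs moderately everywhere. MEV (as approximated by domain randomization with a uniform distribution over environments) maximizes mean return, whereas MMER minimizes the worst-case gap to the best achievable return in each environment.

```python
# Toy illustration of policy selection under MEV vs. MMER.
# Hypothetical expected returns: rows are candidate policies,
# columns are three environments. Not data from the paper.
values = {
    "proxy":    [15, 15, 0],  # excels where the proxy goal matches the intended goal
    "intended": [8, 8, 8],    # consistent pursuit of the intended goal
}

def mev_policy(values):
    """Maximum expected value: pick the policy with the highest mean
    return over environments (uniform environment distribution)."""
    return max(values, key=lambda p: sum(values[p]) / len(values[p]))

def mmer_policy(values):
    """Minimax expected regret: pick the policy whose worst-case regret
    is smallest, where regret in an environment is the gap between the
    best achievable return there and the policy's return."""
    n_envs = len(next(iter(values.values())))
    best = [max(values[p][e] for p in values) for e in range(n_envs)]
    return min(
        values,
        key=lambda p: max(best[e] - values[p][e] for e in range(n_envs)),
    )

print(mev_policy(values))   # the proxy policy wins on average return
print(mmer_policy(values))  # the intended policy wins on worst-case regret
```

In this sketch, MEV selects the proxy policy (mean 10 vs. 8), while MMER selects the intended policy (worst-case regret 7 vs. 8): the regret criterion penalizes the proxy policy's collapse in the environment where the goals come apart, mirroring the paper's theoretical distinction.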