🤖 AI Summary
In reinforcement learning, real-world rewards are often corrupted by unknown disturbances such as adversarial attacks, sensor noise, or subjective human feedback, which degrades policy performance. Existing robust RL methods typically rely on strong assumptions: prior knowledge of the disturbance, access to clean rewards, or invariance of the optimal policy under corruption, which limits their practical applicability. This paper proposes the Distributional Reward Critic (DRC) framework, the first approach to jointly model the true reward distribution and the corruption pattern under general, irreversible, non-improving, and completely unknown disturbances, without requiring clean rewards, disturbance priors, or parametric assumptions about the corruption mechanism. DRC is algorithm-agnostic and integrates with any RL method. Empirically, across 48 diverse perturbation settings, DRC achieves the best or tied-best return in 44 (the best baseline does so in only 11), while maintaining stable or even improved performance in clean environments.
📝 Abstract
The reward signal plays a central role in defining the desired behaviors of agents in reinforcement learning (RL). Rewards collected from realistic environments can be perturbed, corrupted, or noisy due to an adversary, sensor error, or because they come from subjective human feedback. It is therefore important to construct agents that can learn under such rewards. Existing methodologies for this problem make strong assumptions: that the perturbation is known in advance, that clean rewards are accessible, or that the perturbation preserves the optimal policy. We study a new, more general class of unknown perturbations and introduce a distributional reward critic framework for estimating reward distributions and perturbations during training. Our proposed methods are compatible with any RL algorithm. Despite their increased generality, they achieve comparable or better rewards than existing methods in a variety of environments, including those with clean rewards. Under the challenging, generalized perturbations we study, we achieve or tie the highest return in 44 of 48 tested settings (versus 11 of 48 for the best baseline). Our results broaden and deepen our ability to perform RL in reward-perturbed environments.
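To make the core idea concrete, here is a deliberately simplified toy sketch of what "estimating a reward distribution to recover the true reward" can look like. This is not the paper's DRC architecture (which trains a learned critic alongside the policy); it is a hypothetical stand-in that discretizes the perturbed reward observations for a single state-action pair into bins, builds an empirical distribution, and reads off the modal bin's center as the estimated true reward. The function name, bin count, and corruption model below are all illustrative assumptions.

```python
import numpy as np

def estimate_reward(observed, n_bins=10, lo=0.0, hi=1.0):
    """Toy reward-distribution estimate for ONE state-action pair.

    Hypothetical simplification of a distributional reward critic:
    bin the perturbed reward samples, form an empirical histogram,
    and return the center of the most probable bin as the estimated
    true reward.
    """
    edges = np.linspace(lo, hi, n_bins + 1)
    # Map each sample to a bin index in [0, n_bins - 1].
    idx = np.clip(np.digitize(observed, edges) - 1, 0, n_bins - 1)
    counts = np.bincount(idx, minlength=n_bins)
    mode = counts.argmax()
    return 0.5 * (edges[mode] + edges[mode + 1])

# Illustrative corruption model (an assumption, not from the paper):
# most samples carry the true reward (0.75); a minority are replaced
# by uniform noise, as an adversary or noisy sensor might do.
rng = np.random.default_rng(0)
samples = np.concatenate([np.full(70, 0.75), rng.uniform(0.0, 1.0, 30)])
print(estimate_reward(samples))  # modal bin center, close to 0.75
```

The point of the sketch is only the shape of the approach: because the corruption leaves the true reward the most probable observation here, modeling the full distribution (rather than averaging samples, which the noise would bias) lets the agent recover a clean training signal without knowing the perturbation in advance.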