🤖 AI Summary
Weak zero-shot generalization and susceptibility to task-irrelevant pixel perturbations—causing representation bias—are key challenges in visual reinforcement learning. To address these, this paper proposes a saliency-guided robust representation learning framework. Its core contributions are: (1) a saliency-guided value-consistency module that jointly optimizes visual representations and value-function prediction; (2) a dynamics-aware representation learning mechanism to improve environmental dynamics modeling; and (3) a KL-constrained policy-consistency regularizer, theoretically guaranteeing cross-environment policy robustness. The method integrates saliency masking, observation perturbation augmentation, and multi-objective co-optimization. Evaluated on DMC-GB, Robotic Manipulation, and CARLA benchmarks, it achieves average performance improvements of 14%, 39%, and 69%, respectively, significantly outperforming state-of-the-art methods.
📝 Abstract
Generalizing policies to unseen scenarios remains a critical challenge in visual reinforcement learning, where agents often overfit to the specific visual observations of the training environment. In unseen environments, distracting pixels may lead agents to extract representations containing task-irrelevant information. As a result, agents may deviate from the optimal behaviors learned during training, thereby hindering visual generalization.

To address this issue, we propose the Salience-Invariant Consistent Policy Learning (SCPL) algorithm, an efficient framework for zero-shot generalization. Our approach introduces a novel value consistency module alongside a dynamics module to effectively capture task-relevant representations. The value consistency module, guided by saliency, ensures the agent focuses on task-relevant pixels in both original and perturbed observations, while the dynamics module uses augmented data to help the encoder capture dynamic- and reward-relevant representations. Additionally, our theoretical analysis highlights the importance of policy consistency for generalization. To strengthen this, we introduce a policy consistency module with a KL divergence constraint to maintain consistent policies across original and perturbed observations.

Extensive experiments on the DMC-GB, Robotic Manipulation, and CARLA benchmarks demonstrate that SCPL significantly outperforms state-of-the-art methods in terms of generalization. Notably, SCPL achieves average performance improvements of 14%, 39%, and 69% in the challenging DMC video hard setting, the Robotic hard setting, and the CARLA benchmark, respectively.

Project Page: https://sites.google.com/view/scpl-rl.
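To make the policy-consistency idea concrete, here is a minimal NumPy sketch of a KL-divergence regularizer between the policy's action distributions on an original and a perturbed observation. This is an illustrative toy, not the authors' implementation: the diagonal-Gaussian policy, the linear toy network, and the `beta` weight are all assumptions for the example.

```python
import numpy as np

def gaussian_kl(mu_p, sigma_p, mu_q, sigma_q):
    """KL(p || q) between diagonal Gaussians, summed over action dimensions."""
    return np.sum(
        np.log(sigma_q / sigma_p)
        + (sigma_p**2 + (mu_p - mu_q)**2) / (2.0 * sigma_q**2)
        - 0.5
    )

def policy_consistency_loss(policy, obs, obs_aug, beta=1.0):
    """Penalize divergence between the policy on original and perturbed inputs."""
    mu, sigma = policy(obs)
    mu_aug, sigma_aug = policy(obs_aug)
    return beta * gaussian_kl(mu, sigma, mu_aug, sigma_aug)

# Toy policy: action mean is a linear map of the observation, fixed std.
W = np.array([[0.5, -0.2], [0.1, 0.3]])
policy = lambda o: (W @ o, np.full(2, 0.1))

obs = np.array([1.0, 2.0])
obs_aug = obs + np.array([0.05, -0.05])  # stand-in for a pixel perturbation

loss = policy_consistency_loss(policy, obs, obs_aug)
```

Minimizing this term pushes the policy toward identical action distributions on clean and perturbed views; the loss is zero exactly when the two distributions match, and non-negative otherwise.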