AI Summary
This work addresses the limited generalization of existing state abstraction methods in reinforcement learning to unseen environments, which hinders effective policy transfer. To overcome this limitation, the authors propose a learnable sparse masking mechanism that is deeply integrated with attention weights to dynamically filter out redundant information from observations, thereby yielding more robust state representations. The method is embedded within an attention-based policy network and optimized under the Proximal Policy Optimization (PPO) framework. Experimental results on the Procgen benchmark demonstrate that the proposed approach significantly outperforms standard PPO and existing masking strategies, achieving notably improved generalization performance on unseen tasks.
Abstract
In reinforcement learning, abstraction methods that remove unnecessary information from the observation are commonly used to learn policies that generalize better to unseen tasks. However, these methods often overlook a crucial weakness: the function that extracts the reduced-information representation has unknown generalization ability on unseen observations. In this paper, we address this problem by presenting an information removal method that generalizes more reliably to new states. We accomplish this by using a learned masking function which operates on, and is integrated with, the attention weights within an attention-based policy network. We demonstrate that our method significantly improves policy generalization to unseen tasks in the Procgen benchmark compared to standard PPO and existing masking approaches.
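The core idea of a learned mask operating on attention weights can be sketched roughly as follows. This is a hypothetical illustration, not the paper's implementation: the class name, the sigmoid gating form, the per-token mask parameterization, and the L1 sparsity penalty are all assumptions made for exposition.

```python
# Hypothetical sketch: single-head attention whose weights are gated by a
# learnable mask, so masked-out observation tokens contribute little to the
# state representation. Details are illustrative assumptions.
import torch
import torch.nn as nn


class MaskedAttention(nn.Module):
    def __init__(self, embed_dim: int, num_tokens: int):
        super().__init__()
        self.q = nn.Linear(embed_dim, embed_dim)
        self.k = nn.Linear(embed_dim, embed_dim)
        self.v = nn.Linear(embed_dim, embed_dim)
        # Learnable per-token mask logits; sigmoid yields a soft 0-1 gate.
        self.mask_logits = nn.Parameter(torch.zeros(num_tokens))

    def forward(self, x: torch.Tensor):
        # x: (batch, num_tokens, embed_dim)
        scores = self.q(x) @ self.k(x).transpose(-2, -1) / x.shape[-1] ** 0.5
        attn = torch.softmax(scores, dim=-1)
        gate = torch.sigmoid(self.mask_logits)            # (num_tokens,)
        attn = attn * gate                                # suppress masked tokens
        # Renormalize so rows still sum to 1 after gating.
        attn = attn / attn.sum(dim=-1, keepdim=True).clamp_min(1e-8)
        return attn @ self.v(x), gate


def sparsity_loss(gate: torch.Tensor) -> torch.Tensor:
    # An L1-style penalty pushing the gate toward zero would encourage the
    # network to retain only a sparse subset of observation tokens.
    return gate.abs().mean()
```

In a full training setup, a penalty such as `sparsity_loss` would be added to the PPO objective so that the mask and the policy are optimized jointly.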