Sparse Masked Attention Policies for Reliable Generalization

📅 2026-02-23
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the limited generalization of existing state abstraction methods in reinforcement learning to unseen environments, which hinders effective policy transfer. To overcome this limitation, the authors propose a learnable sparse masking mechanism that is deeply integrated with attention weights to dynamically filter out redundant information from observations, thereby yielding more robust state representations. The method is embedded within an attention-based policy network and optimized under the Proximal Policy Optimization (PPO) framework. Experimental results on the Procgen benchmark demonstrate that the proposed approach significantly outperforms standard PPO and existing masking strategies, achieving notably improved generalization performance on unseen tasks.
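The mechanism summarized above, a learned sparse mask applied to the attention weights inside the policy network, can be illustrated with a minimal NumPy sketch. This is an assumption-laden simplification, not the paper's actual formulation: the function name `sparse_masked_attention`, the sigmoid gate, and the hard `threshold` are hypothetical stand-ins for whatever learned masking function the authors use.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sparse_masked_attention(q, k, v, mask_logits, threshold=0.5):
    """Attention whose weights are gated by a learned sparse mask.

    mask_logits: per-key logits of a hypothetical learned masking
    function; keys whose sigmoid gate falls below `threshold` are
    zeroed out of the attention weights (information removal).
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)               # (n_q, n_k) attention scores
    gate = 1.0 / (1.0 + np.exp(-mask_logits))   # sigmoid gate in [0, 1]
    keep = (gate >= threshold).astype(float)    # sparsify: keep or drop keys
    weights = softmax(scores, axis=-1) * keep   # zero out masked keys
    weights /= weights.sum(axis=-1, keepdims=True) + 1e-9  # renormalize
    return weights @ v, weights
```

In the paper's setting the gate would be trained jointly with the PPO objective so that redundant observation features receive near-zero weight; here the logits are simply given as inputs to keep the sketch self-contained.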

๐Ÿ“ Abstract
In reinforcement learning, abstraction methods that remove unnecessary information from the observation are commonly used to learn policies which generalize better to unseen tasks. However, these methods often overlook a crucial weakness: the function which extracts the reduced-information representation has unknown generalization ability on unseen observations. In this paper, we address this problem by presenting an information removal method which more reliably generalizes to new states. We accomplish this by using a learned masking function which operates on, and is integrated with, the attention weights within an attention-based policy network. We demonstrate that our method significantly improves policy generalization to unseen tasks on the Procgen benchmark compared to standard PPO and masking approaches.
Problem

Research questions and friction points this paper is trying to address.

reinforcement learning
generalization
abstraction
observation representation
masked attention
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sparse Masked Attention
Policy Generalization
Reinforcement Learning
Attention Mechanism
Information Removal
Caroline Horsch
Department of Intelligent Systems, Delft University of Technology, Delft, Netherlands
Laurens Engwegen
Department of Intelligent Systems, Delft University of Technology, Delft, Netherlands
Max Weltevrede
Department of Intelligent Systems, Delft University of Technology, Delft, Netherlands
Matthijs T. J. Spaan
Delft University of Technology
Wendelin Böhmer
Sequential Decision Making Group, Delft University of Technology
artificial intelligence
machine learning
reinforcement learning
multi-agent systems