Offline Reinforcement Learning with Penalized Action Noise Injection

๐Ÿ“… 2025-07-03
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Offline reinforcement learning (RL) is constrained by fixed datasets, making generalization a key performance bottleneck; while diffusion-based approaches improve generalization, they incur substantial inference overhead. To address this, we propose Penalized Action Noise Injection (PANI), a lightweight method that injects penalty-regularized noise into the action space, enhancing policy generalization without requiring complex generative models. We theoretically establish that PANI is equivalent to solving a novel class of noisy action Markov decision processes (MDPs). PANI is modular and can be seamlessly integrated into diverse offline RL algorithms. Empirical evaluation across multiple benchmark tasks demonstrates consistent and significant performance gains while maintaining low inference cost. Our core contributions are threefold: (i) a computationally efficient action-space augmentation mechanism; (ii) a rigorous theoretical characterization linking this mechanism to a well-defined MDP formulation; and (iii) comprehensive validation of its broad applicability and practical effectiveness in offline RL.

๐Ÿ“ Abstract
Offline reinforcement learning (RL) optimizes a policy using only a fixed dataset, making it a practical approach in scenarios where interaction with the environment is costly. Due to this limitation, generalization ability is key to improving the performance of offline RL algorithms, as demonstrated by recent successes of offline RL with diffusion models. However, it remains questionable whether such diffusion models are necessary for high-performing offline RL algorithms, given their significant computational requirements during inference. In this paper, we propose Penalized Action Noise Injection (PANI), a method that simply enhances offline learning by utilizing noise-injected actions to cover the entire action space, while penalizing according to the amount of noise injected. This approach is inspired by how diffusion models have worked in offline RL algorithms. We provide a theoretical foundation for this method, showing that offline RL algorithms with such noise-injected actions solve a modified Markov Decision Process (MDP), which we call the noisy action MDP. PANI is compatible with a wide range of existing off-policy and offline RL algorithms, and despite its simplicity, it demonstrates significant performance improvements across various benchmarks.
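The core idea in the abstract, perturbing dataset actions with noise and discounting the reward by the amount of noise injected, can be sketched as a simple reward modification. The Gaussian noise model and the norm-based penalty with coefficient $\lambda$ below are illustrative assumptions, not necessarily the paper's exact formulation:

```latex
\tilde{a} = a + \epsilon, \qquad \epsilon \sim \mathcal{N}(0, \sigma^2 I),
\qquad
\tilde{r}(s, \tilde{a}) = r(s, a) - \lambda \,\lVert \epsilon \rVert
```

Training on transitions $(s, \tilde{a}, \tilde{r}, s')$ instead of $(s, a, r, s')$ then corresponds, per the paper's claim, to solving a modified "noisy action MDP" rather than the original one.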
Problem

Research questions and friction points this paper is trying to address.

Enhancing offline RL generalization via noise-injected actions
Reducing computational costs compared to diffusion models
Improving performance across diverse offline RL benchmarks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Noise-injected actions cover the entire action space
Penalizes each action according to the amount of noise injected
Shows the resulting algorithm solves a modified MDP (the noisy action MDP)
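The three points above suggest a simple data-augmentation step that could sit in front of any off-policy critic update. The sketch below is a hypothetical NumPy illustration of that idea, not the paper's reference implementation: the Gaussian noise scale `sigma`, the penalty coefficient `penalty_coef`, and the norm-based penalty are all assumed knobs.

```python
import numpy as np

rng = np.random.default_rng(0)

def inject_penalized_noise(actions, rewards, sigma=0.1, penalty_coef=1.0):
    """Perturb dataset actions with Gaussian noise and penalize rewards accordingly.

    Hypothetical sketch of penalized action noise injection: the noise model
    and penalty form are assumptions, not the paper's exact formulation.
    """
    noise = rng.normal(0.0, sigma, size=actions.shape)
    # Keep perturbed actions inside a bounded action space, assumed [-1, 1].
    noisy_actions = np.clip(actions + noise, -1.0, 1.0)
    # Penalize each transition in proportion to the magnitude of injected noise.
    penalties = penalty_coef * np.linalg.norm(noise, axis=-1)
    penalized_rewards = rewards - penalties
    return noisy_actions, penalized_rewards

# Usage: augment a sampled minibatch before the critic update of an
# off-policy learner; the learner itself is unchanged.
actions = rng.uniform(-1.0, 1.0, size=(4, 2))  # batch of 4 two-dimensional actions
rewards = np.ones(4)
noisy_a, pen_r = inject_penalized_noise(actions, rewards)
```

Because the penalty is nonnegative, augmented transitions can only lower the reward, which matches the intuition that actions far from the dataset should look less attractive to the learned critic.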
๐Ÿ”Ž Similar Papers
No similar papers found.