🤖 AI Summary
This work addresses the optimization instability and overly conservative updates that gradient noise in Gaussian policies induces in standard on-policy reinforcement learning. To mitigate these issues, the authors propose modeling continuous action spaces as discrete categorical distributions, integrating a regularized policy network with a fixed-structure critic and employing a cross-entropy-like policy objective. This framework yields a more robust on-policy optimization process. Empirical evaluations across multiple continuous control benchmarks demonstrate consistent and significant performance improvements, achieving state-of-the-art results while substantially enhancing training stability and sample efficiency.
📝 Abstract
On-policy deep reinforcement learning remains a dominant paradigm for continuous control, yet standard implementations rely on Gaussian actors and relatively shallow MLP policies, often leading to brittle optimization when gradients are noisy and policy updates must be conservative. In this paper, we revisit policy representation as a first-class design choice for on-policy optimization. We study discretized categorical actors that represent each action dimension with a distribution over bins, yielding a policy objective that resembles a cross-entropy loss. Building on architectural advances from supervised learning, we further propose regularized actor networks while keeping critic design fixed. Our results show that simply replacing the standard actor network with our discretized regularized actor yields consistent gains and achieves state-of-the-art performance across diverse continuous-control benchmarks.
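To make the discretization idea concrete, here is a minimal NumPy sketch (not the authors' code; bin count, bounds, and the advantage value are illustrative assumptions): each action dimension gets its own categorical distribution over bins, sampled bins map back to continuous values, and the policy-gradient loss on the sampled bins takes the form of an advantage-weighted cross-entropy.

```python
import numpy as np

# Hypothetical sketch of a discretized categorical actor for a 2-D
# continuous action space, each dimension split into n_bins bins.
rng = np.random.default_rng(0)
action_dim, n_bins = 2, 7
low, high = -1.0, 1.0
bin_centers = np.linspace(low, high, n_bins)        # (n_bins,)

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Stand-in for the actor network's output: one logit vector per dimension.
logits = rng.normal(size=(action_dim, n_bins))      # (action_dim, n_bins)
probs = softmax(logits)                             # per-dimension categoricals

# Sample one bin per dimension, then map bins back to continuous actions.
bins = np.array([rng.choice(n_bins, p=p) for p in probs])
action = bin_centers[bins]                          # continuous action vector

# The log-probability factorizes across dimensions, so the policy-gradient
# loss -advantage * log pi(a|s) is a cross-entropy on the chosen bins.
log_prob = np.sum(np.log(probs[np.arange(action_dim), bins]))
advantage = 1.5                                     # placeholder advantage estimate
loss = -advantage * log_prob
```

Because every action dimension is a small categorical, the gradient flows through plain softmax logits rather than the mean and variance of a Gaussian, which is the mechanism the paper credits for the more stable updates.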