RN-D: Discretized Categorical Actors with Regularized Networks for On-Policy Reinforcement Learning

📅 2026-01-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the instability and overly conservative updates that arise in standard on-policy reinforcement learning when Gaussian policies produce noisy gradients. To mitigate these issues, the authors model continuous action spaces as discrete categorical distributions, pairing a regularized policy network with a fixed-structure critic and a cross-entropy-like policy objective. This framework yields a more robust on-policy optimization process. Empirical evaluations across multiple continuous-control benchmarks show consistent performance improvements, achieving state-of-the-art results while substantially improving training stability and sample efficiency.

📝 Abstract
On-policy deep reinforcement learning remains a dominant paradigm for continuous control, yet standard implementations rely on Gaussian actors and relatively shallow MLP policies, often leading to brittle optimization when gradients are noisy and policy updates must be conservative. In this paper, we revisit policy representation as a first-class design choice for on-policy optimization. We study discretized categorical actors that represent each action dimension with a distribution over bins, yielding a policy objective that resembles a cross-entropy loss. Building on architectural advances from supervised learning, we further propose regularized actor networks, while keeping the critic design fixed. Our results show that simply replacing the standard actor network with our discretized regularized actor yields consistent gains and achieves state-of-the-art performance across diverse continuous-control benchmarks.
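The core idea from the abstract, representing each continuous action dimension as a categorical distribution over bins, can be sketched in a few lines of NumPy. This is a minimal illustration only: the paper's actual network architecture, bin count, and regularization scheme are not specified here, and the function names (`make_bins`, `sample_discretized_action`) are hypothetical. It shows how sampling a bin per dimension yields a continuous action, and how the summed log-probability of the chosen bins gives the cross-entropy-like objective term that replaces the Gaussian log-density.

```python
import numpy as np

def make_bins(low, high, num_bins):
    # Bin centers uniformly spanning the bounded action interval [low, high].
    edges = np.linspace(low, high, num_bins + 1)
    return (edges[:-1] + edges[1:]) / 2.0

def sample_discretized_action(logits, bin_centers, rng):
    """Sample one bin per action dimension from per-dimension categorical
    distributions, returning the continuous action (the chosen bin centers)
    and its total log-probability.

    logits: array of shape (action_dim, num_bins) produced by the actor net.
    """
    # Softmax over bins, shifted for numerical stability.
    logits = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(logits)
    probs /= probs.sum(axis=-1, keepdims=True)

    action_dim = logits.shape[0]
    idx = np.array([rng.choice(len(bin_centers), p=probs[d])
                    for d in range(action_dim)])
    action = bin_centers[idx]
    # Cross-entropy-like term: log-prob of the sampled bins, summed over dims.
    log_prob = np.log(probs[np.arange(action_dim), idx]).sum()
    return action, log_prob
```

For example, with 5 bins on [-1, 1] the centers are [-0.8, -0.4, 0.0, 0.4, 0.8], and with uniform logits each 2-D action has log-probability 2·log(1/5). In a policy-gradient update, this `log_prob` would be weighted by an advantage estimate, exactly as a Gaussian log-density would be.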
Problem

Research questions and friction points this paper is trying to address.

on-policy reinforcement learning
continuous control
policy representation
actor networks
optimization stability
Innovation

Methods, ideas, or system contributions that make the work stand out.

discretized categorical actors
regularized networks
on-policy reinforcement learning
policy representation
continuous control