Stable Deep Reinforcement Learning via Isotropic Gaussian Representations

📅 2026-02-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
In non-stationary environments, deep reinforcement learning often suffers from representation degradation and training instability due to time-varying targets and shifting data distributions. This work proposes a lightweight, sketch-based regularizer that shapes the agent's learned embeddings toward an isotropic Gaussian distribution. Theoretical analysis shows that such representations offer several advantages: stable tracking of dynamic targets, maximal entropy under a fixed variance budget, and balanced utilization of embedding dimensions. Combined with a linear readout within a deep reinforcement learning framework, the proposed approach improves performance across a variety of tasks while mitigating representation collapse, neuron dormancy, and training instability.

📝 Abstract
Deep reinforcement learning systems often suffer from unstable training dynamics due to non-stationarity, where learning objectives and data distributions evolve over time. We show that under non-stationary targets, isotropic Gaussian embeddings are provably advantageous. In particular, they induce stable tracking of time-varying targets for linear readouts, achieve maximal entropy under a fixed variance budget, and encourage a balanced use of all representational dimensions, all of which enable agents to be more adaptive and stable. Building on this insight, we propose the use of Sketched Isotropic Gaussian Regularization for shaping representations toward an isotropic Gaussian distribution during training. We demonstrate empirically, over a variety of domains, that this simple and computationally inexpensive method improves performance under non-stationarity while reducing representation collapse, neuron dormancy, and training instability.
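To make the idea concrete, here is a minimal PyTorch sketch of what a regularizer of this kind could look like. It is an illustration based only on the abstract, not the paper's actual implementation: the function name `sig_regularizer`, the Gaussian sketch matrix, and the squared-error penalties are all assumptions. The batch of embeddings is projected through a fixed random sketch, and the sketched statistics are pushed toward mean zero and identity covariance.

```python
import torch


def sig_regularizer(features: torch.Tensor, sketch_dim: int = 16) -> torch.Tensor:
    """Hypothetical isotropic-Gaussian penalty on a batch of embeddings.

    Projects (batch, dim) features through a random Gaussian sketch, then
    penalizes the sketched mean's deviation from zero and the sketched
    covariance's deviation from the identity. The exact sketch and penalty
    used in the paper may differ; this only conveys the general shape.
    """
    batch, dim = features.shape
    # Fixed random sketch for this call; in practice it would be sampled
    # once (or per step) and scaled to roughly preserve norms.
    sketch = torch.randn(dim, sketch_dim, device=features.device) / sketch_dim**0.5
    z = features @ sketch                      # (batch, sketch_dim)
    mean = z.mean(dim=0)
    centered = z - mean
    cov = centered.T @ centered / (batch - 1)  # empirical covariance
    eye = torch.eye(sketch_dim, device=z.device)
    # Pull statistics toward those of an isotropic standard Gaussian.
    return mean.pow(2).sum() + (cov - eye).pow(2).sum()
```

In training, a term like `loss = td_loss + lam * sig_regularizer(phi(obs))` would be added to the agent's objective, with `lam` a small weight; the sketch keeps the covariance penalty cheap when the embedding dimension is large.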
Problem

Research questions and friction points this paper is trying to address.

non-stationarity
training instability
representation collapse
deep reinforcement learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Isotropic Gaussian Representations
Non-stationarity
Representation Regularization
Deep Reinforcement Learning
Stable Training