Eau De $Q$-Network: Adaptive Distillation of Neural Networks in Deep Reinforcement Learning

📅 2025-03-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In deep reinforcement learning, existing dense-to-sparse conversion methods rely on hand-crafted sparsification schedules that poorly align with agent learning dynamics; moreover, the final sparsity level is a hyperparameter that requires empirical tuning and, if set too high, degrades performance. To address this, the authors propose Eau De $Q$-Network (EauDeQN), a learning-progress-driven adaptive sparsification mechanism. Its core comprises multiple online networks at different sparsity levels, trained concurrently from a shared target network; at each target update, the network with the smallest loss is selected as the next target network, while the others are replaced by pruned copies of it. EauDeQN integrates natively into the DQN framework, unifying online/target network design, dynamic selection, and pruning. Evaluated on the Atari 2600 and MuJoCo benchmarks, EauDeQN reaches high sparsity levels while maintaining near-dense performance, outperforming fixed-schedule baselines by aligning sparsification with the agent's own learning dynamics.

📝 Abstract
Recent works have successfully demonstrated that sparse deep reinforcement learning agents can be competitive against their dense counterparts. This opens up opportunities for reinforcement learning applications in fields where inference time and memory requirements are cost-sensitive or limited by hardware. Until now, dense-to-sparse methods have relied on hand-designed sparsity schedules that are not synchronized with the agent's learning pace. Crucially, the final sparsity level is chosen as a hyperparameter, which requires careful tuning as setting it too high might lead to poor performances. In this work, we address these shortcomings by crafting a dense-to-sparse algorithm that we name Eau De $Q$-Network (EauDeQN). To increase sparsity at the agent's learning pace, we consider multiple online networks with different sparsity levels, where each online network is trained from a shared target network. At each target update, the online network with the smallest loss is chosen as the next target network, while the other networks are replaced by a pruned version of the chosen network. We evaluate the proposed approach on the Atari $2600$ benchmark and the MuJoCo physics simulator, showing that EauDeQN reaches high sparsity levels while keeping performances high.
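The target-update rule described in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: `magnitude_prune` and `eau_de_qn_target_update` are hypothetical helpers, global magnitude pruning stands in for whatever pruning criterion the paper uses, and networks are represented as plain lists of NumPy weight arrays.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude weights until roughly `sparsity`
    fraction are zero (global magnitude pruning; the paper's exact
    pruning criterion may differ)."""
    flat = np.abs(np.concatenate([w.ravel() for w in weights]))
    k = int(sparsity * flat.size)
    if k == 0:
        return [w.copy() for w in weights]
    threshold = np.partition(flat, k - 1)[k - 1]
    return [np.where(np.abs(w) <= threshold, 0.0, w) for w in weights]

def eau_de_qn_target_update(online_nets, losses, prune_step=0.05):
    """One target update in the spirit of EauDeQN: the online network with
    the smallest loss becomes the next target network; every other slot is
    refilled with a further-pruned copy of the winner, so each slot probes
    a different sparsity level."""
    winner = int(np.argmin(losses))
    target = online_nets[winner]
    new_online = []
    for i in range(len(online_nets)):
        if i == winner:
            # the winner keeps training from its own weights
            new_online.append([w.copy() for w in target])
        else:
            # replaced slots restart from a pruned copy of the winner
            new_online.append(magnitude_prune(target, prune_step * (i + 1)))
    return target, new_online
```

Because the winner is chosen by real-time loss rather than a fixed schedule, sparsity only increases when a sparser network is actually keeping pace with the agent's learning, which is the mechanism the paper credits for avoiding a hand-tuned final sparsity level.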
Problem

Research questions and friction points this paper is trying to address.

Hand-designed sparsity schedules are not synchronized with the agent's learning pace.
The final sparsity level is a hyperparameter requiring careful tuning; set too high, it degrades performance.
Existing dense-to-sparse methods therefore struggle to stay competitive with dense agents at high sparsity.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sparsity increases adaptively, synchronized with the agent's learning pace
Multiple online networks at different sparsity levels trained from a shared target network
Loss-based selection of the next target network, with the remaining networks replaced by pruned copies