Improve the Training Efficiency of DRL for Wireless Communication Resource Allocation: The Role of Generative Diffusion Models

📅 2025-02-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the low training efficiency of deep reinforcement learning (DRL) and high retraining overhead under dynamic environmental changes in wireless communication resource allocation, this paper proposes a Diffusion-Augmented Deep Reinforcement Learning (D²RL) framework. D²RL is the first to systematically integrate generative diffusion models (GDMs) into three core DRL components: state representation, action exploration, and reward design. It introduces a novel dual-mode operational mechanism: Mode I optimizes a discriminative reward function, while Mode II employs iterative denoising to generate diverse state samples, thereby enhancing policy generalization. Evaluated across multiple resource allocation tasks, D²RL achieves over 40% faster convergence and reduces computational cost by 35%, while matching the performance of state-of-the-art DRL algorithms—including PPO and SAC—and enabling real-time deployment on edge devices.

Technology Category

Application Category

📝 Abstract
Dynamic resource allocation in mobile wireless networks involves complex, time-varying optimization problems, motivating the adoption of deep reinforcement learning (DRL). However, most existing works rely on pre-trained policies, overlooking dynamic environmental changes that rapidly invalidate the policies. Periodic retraining becomes inevitable but incurs prohibitive computational costs and energy consumption-critical concerns for resource-constrained wireless systems. We identify three root causes of inefficient retraining: high-dimensional state spaces, suboptimal action spaces exploration-exploitation trade-offs, and reward design limitations. To overcome these limitations, we propose Diffusion-based Deep Reinforcement Learning (D2RL), which leverages generative diffusion models (GDMs) to holistically enhance all three DRL components. Iterative refinement process and distribution modelling of GDMs enable (1) the generation of diverse state samples to improve environmental understanding, (2) balanced action space exploration to escape local optima, and (3) the design of discriminative reward functions that better evaluate action quality. Our framework operates in two modes: Mode I leverages GDMs to explore reward spaces and design discriminative reward functions that rigorously evaluate action quality, while Mode II synthesizes diverse state samples to enhance environmental understanding and generalization. Extensive experiments demonstrate that D2RL achieves faster convergence and reduced computational costs over conventional DRL methods for resource allocation in wireless communications while maintaining competitive policy performance. This work underscores the transformative potential of GDMs in overcoming fundamental DRL training bottlenecks for wireless networks, paving the way for practical, real-time deployments.
Problem

Research questions and friction points this paper is trying to address.

Enhance DRL training efficiency
Optimize wireless resource allocation
Leverage generative diffusion models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses generative diffusion models
Enhances DRL component efficiency
Reduces computational training costs
🔎 Similar Papers
No similar papers found.