🤖 AI Summary
To address the high latency and resource overhead of deploying generative diffusion models (GDMs) in mobile edge networks, this paper proposes a joint service placement and multi-user access optimization framework. Methodologically, it (1) dynamically partitions denoising blocks across heterogeneous edge nodes; (2) introduces an adaptive inference step reduction mechanism to alleviate both communication and computational loads; and (3) designs a Double Dueling Deep Q-Network (D3QN)-based service placement algorithm that, combined with a greedy access policy, forms LEARN-GDM, a deep reinforcement learning-driven orchestration scheme. Experimental results demonstrate that, compared to monolithic and fixed-chain deployments, the proposed approach significantly improves scalability and latency robustness while maintaining quality-of-service guarantees, thereby substantially enhancing the efficiency of edge-hosted GDM inference.
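The paper does not spell out the D3QN internals here, but the name combines two standard ideas: a dueling head that decomposes Q-values into state value and action advantages, and double Q-learning, where the online network selects the next action and the target network evaluates it. A minimal sketch of both (all function names and numbers are illustrative, not from the paper):

```python
import numpy as np

def dueling_q(value, advantages):
    """Dueling aggregation: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a).
    Subtracting the mean advantage keeps V and A identifiable."""
    return value + advantages - advantages.mean(axis=-1, keepdims=True)

def double_dqn_target(reward, gamma, q_online_next, q_target_next, done):
    """Double DQN bootstrap target: the online net picks the argmax action,
    the target net supplies its value, reducing overestimation bias."""
    a_star = int(np.argmax(q_online_next))
    return reward + (0.0 if done else gamma * q_target_next[a_star])

# Toy usage: V(s)=1.0, advantages for three actions (here, e.g., three
# candidate edge nodes for the next denoising block).
q_values = dueling_q(1.0, np.array([0.5, -0.5, 0.0]))   # -> [1.5, 0.5, 1.0]
target = double_dqn_target(1.0, 0.9,
                           q_online_next=np.array([0.2, 0.8]),
                           q_target_next=np.array([0.3, 0.5]),
                           done=False)                   # 1 + 0.9 * 0.5
```

In a placement setting, the state would encode node loads and the current denoising-chain partition, and actions would map blocks to nodes; those details are left to the paper.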
📄 Abstract
Generative Diffusion Models (GDMs) have emerged as key components of Generative Artificial Intelligence (GenAI), offering unparalleled expressiveness and controllability for complex data generation tasks. However, their deployment in real-time and mobile environments remains challenging due to the iterative and resource-intensive nature of the inference process. Addressing these challenges, this paper introduces a unified optimization framework that jointly tackles service placement and multiple access control for GDMs in mobile edge networks. We propose LEARN-GDM, a Deep Reinforcement Learning-based algorithm that dynamically partitions denoising blocks across heterogeneous edge nodes, while accounting for latent transmission costs and enabling adaptive reduction of inference steps. Our approach integrates a greedy multiple access scheme with a Double and Dueling Deep Q-Learning (D3QL)-based service placement policy, allowing for scalable, adaptable, and resource-efficient operation under stringent quality-of-service requirements. Simulations demonstrate the superior performance of the proposed framework in terms of scalability and latency resilience compared to conventional monolithic and fixed chain-length placement strategies. This work advances the state of the art in edge-enabled GenAI by offering an adaptable solution for GDM service orchestration, paving the way for future extensions toward semantic networking and co-inference across distributed environments.
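The greedy multiple access scheme is not specified in the abstract; one plausible minimal sketch is to admit users one at a time to whichever edge node minimizes post-assignment utilization (the function, its arguments, and the load model are assumptions for illustration, not the paper's formulation):

```python
def greedy_access(user_demands, node_capacities):
    """Greedily assign each user to the edge node that minimizes its
    utilization after the assignment. Returns one node index per user.

    user_demands:    per-user compute demand (arbitrary units)
    node_capacities: per-node compute capacity (same units)
    """
    loads = [0.0] * len(node_capacities)
    assignment = []
    for demand in user_demands:
        # Evaluate hypothetical utilization of every node; ties break
        # toward the lowest index (Python's min is stable).
        best = min(range(len(node_capacities)),
                   key=lambda n: (loads[n] + demand) / node_capacities[n])
        loads[best] += demand
        assignment.append(best)
    return assignment

# Three equal users, two equal nodes: load spreads across both nodes.
result = greedy_access([2.0, 2.0, 2.0], [4.0, 4.0])  # -> [0, 1, 0]
```

A real system would rank nodes by estimated end-to-end latency (compute plus latent transmission) rather than raw utilization, consistent with the latent transmission costs the abstract mentions.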