Efficient Multi-user Offloading of Personalized Diffusion Models: A DRL-Convex Hybrid Solution

📅 2024-11-24
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
Deploying personalized diffusion models on edge-based multi-user heterogeneous devices is challenging due to dynamic user scaling, heterogeneous terminal compute capabilities, and volatile edge resource availability. Method: We propose a phased hybrid inference mechanism: the edge server performs coarse-grained batch generation, while lightweight on-device models refine personalized details. We introduce the first joint optimization framework for multi-user split-point selection and offloading scheduling, formulating the generalized quadratic assignment problem (GQAP) as an adaptive Markov decision process (MDP), and solve it via a synergistic combination of deep reinforcement learning and convex optimization—yielding low-complexity near-optimal solutions to this NP-hard problem. Results: Experiments demonstrate significant improvements over baselines in the latency–accuracy trade-off, with 37% higher edge resource utilization and 52% reduction in on-device GPU memory consumption.

Technology Category

Application Category

📝 Abstract
With the impressive generative capabilities of diffusion models, personalized content synthesis has emerged as the most highly anticipated. However, the large model sizes and iterative nature of inference make it difficult to deploy personalized diffusion models broadly on local devices with varying computational power. To this end, we propose a novel framework for efficient multi-user offloading of personalized diffusion models, given a variable number of users, diverse user computational capabilities, and fluctuating available computational resources on the edge server. To enhance computational efficiency and reduce storage burden on edge servers, we first propose a tailored multi-user hybrid inference manner, where the inference process for each user is split into two phases with an optimizable split point. The initial phase of inference is processed on a cluster-wide model using batching techniques, generating low-level semantic information corresponding to each user's prompt. Then, the users employ their own personalized model to add further details in the later inference phase. Given the constraints on edge server computational resources and users' preferences for low latency and high accuracy, we model the joint optimization of each user's offloading request handling and split point as an extension of the Generalized Quadratic Assignment Problem (GQAP). Our objective is to maximize a comprehensive metric that accounts for both latency and accuracy across all users. To tackle this NP-hard problem, we transform the GQAP into an adaptive decision sequence, model it as a Markov decision process, and develop a hybrid solution combining deep reinforcement learning with convex optimization techniques. Simulation results validate the effectiveness of our framework, demonstrating superior optimality and low complexity compared to traditional methods.
Problem

Research questions and friction points this paper is trying to address.

Efficient offloading of personalized diffusion models to edge servers.
Optimizing computational resources for multi-user environments.
Balancing latency and accuracy in model inference processes.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid inference split for multi-user efficiency
DRL-convex optimization for NP-hard problem solving
Batching techniques to reduce edge server storage
🔎 Similar Papers
No similar papers found.