🤖 AI Summary
Diffusion-based image generation faces several bottlenecks: high computational overhead, time-consuming iterative prompt tuning, and heavy demand on cloud resources. To address these, we propose DiffusionX, the first edge–cloud collaborative framework supporting multi-round prompt evolution: a lightweight diffusion model on the edge rapidly generates coarse previews, while a large-scale model in the cloud performs fine-grained refinement. We introduce a novel noise-level predictor that dynamically allocates computational tasks to optimize the trade-off between end-to-end latency and cloud load. Experiments show that our framework reduces average generation time by 15.8% compared with Stable Diffusion v1.5 at comparable image quality (FID and CLIP-Score), and incurs only 0.9% higher latency than Tiny-SD while significantly improving FID. This work is the first to deeply integrate iterative prompt refinement into the edge–cloud generation pipeline, achieving a balanced design across efficiency, fidelity, and scalability.
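The dispatch idea described above can be sketched as follows. This is a minimal illustrative toy, not the paper's actual method: the function names, the linear noise-level heuristic, and the step budgets are all assumptions introduced here for clarity. The key point it captures is that the predictor maps an estimate of remaining "noise" (how far the current preview is from the finalized prompt's target) to a split of denoising steps between the edge and the cloud.

```python
# Hypothetical sketch of edge-cloud step allocation driven by a
# noise-level predictor. All names and constants here are illustrative
# assumptions, not DiffusionX's real API.

def predict_noise_level(prompt_round: int, max_rounds: int = 5) -> float:
    """Toy stand-in for the noise-level predictor: assume later
    prompt-refinement rounds leave less residual noise to remove."""
    return max(0.0, 1.0 - prompt_round / max_rounds)


def allocate_steps(noise_level: float,
                   edge_budget: int = 10,
                   cloud_budget: int = 40) -> dict:
    """Split denoising steps: the edge always runs a fast fixed-budget
    preview; cloud refinement scales with the predicted residual noise."""
    cloud_steps = round(cloud_budget * noise_level)
    return {"edge_steps": edge_budget, "cloud_steps": cloud_steps}


# Early rounds: the user iterates cheaply on edge previews.
# Once the prompt is near-final, the cloud performs the heavy refinement.
plan = allocate_steps(predict_noise_level(prompt_round=4))
```

Under this toy heuristic, a near-final prompt (round 4 of 5) requests only a small number of cloud steps, which is the latency/cloud-load trade-off the predictor is meant to optimize.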
📝 Abstract
Recent advances in diffusion models have driven remarkable progress in image generation. However, the generation process remains computationally intensive, and users often need to refine prompts iteratively to achieve the desired results, further increasing latency and placing a heavy burden on cloud resources. To address this challenge, we propose DiffusionX, a cloud-edge collaborative framework for efficient multi-round, prompt-based generation. In this system, a lightweight on-device diffusion model interacts with users by rapidly producing preview images, while a high-capacity cloud model performs final refinements once the prompt is finalized. We further introduce a noise level predictor that dynamically balances the computational load, optimizing the trade-off between latency and cloud workload. Experiments show that DiffusionX reduces average generation time by 15.8% compared with Stable Diffusion v1.5, while maintaining comparable image quality. Moreover, it is only 0.9% slower than Tiny-SD with significantly improved image quality, demonstrating efficiency and scalability with minimal overhead.