🤖 AI Summary
Diffusion model inference across multiple accelerators suffers from high inter-device activation transfer overhead and poor scalability. This work identifies, for the first time, strong temporal redundancy in diffusion model activations across denoising steps and proposes a residual compression communication mechanism: only the activation differences between consecutive denoising steps are transmitted, augmented with a lightweight error-feedback scheme to suppress accumulated distortion. The method substantially reduces communication volume and naturally integrates with high-communication paradigms such as sequence parallelism. On a 4×L20 platform, it achieves 3.0× end-to-end inference speedup with improved generation fidelity; under slow-network conditions, it outperforms baselines by 6.7×, while remaining compatible with mainstream diffusion models and parallel architectures. The core contribution lies in the first synergistic integration of temporal residual compression and error feedback into diffusion inference communication optimization—enabling efficient, low-distortion, and broadly applicable distributed inference acceleration.
📝 Abstract
Diffusion models produce realistic images and videos but require substantial computational resources, necessitating multi-accelerator parallelism for real-time deployment. However, parallel inference introduces significant communication overhead from exchanging large activations between devices, limiting efficiency and scalability. We present CompactFusion, a compression framework that significantly reduces communication while preserving generation quality. Our key observation is that diffusion activations exhibit strong temporal redundancy-adjacent steps produce highly similar activations, saturating bandwidth with near-duplicate data carrying little new information. To address this inefficiency, we seek a more compact representation that encodes only the essential information. CompactFusion achieves this via Residual Compression that transmits only compressed residuals (step-wise activation differences). Based on empirical analysis and theoretical justification, we show that it effectively removes redundant data, enabling substantial data reduction while maintaining high fidelity. We also integrate lightweight error feedback to prevent error accumulation. CompactFusion establishes a new paradigm for parallel diffusion inference, delivering lower latency and significantly higher generation quality than prior methods. On 4xL20, it achieves 3.0x speedup while greatly improving fidelity. It also uniquely supports communication-heavy strategies like sequence parallelism on slow networks, achieving 6.7x speedup over prior overlap-based method. CompactFusion applies broadly across diffusion models and parallel settings, and integrates easily without requiring pipeline rework. Portable implementation demonstrated on xDiT is publicly available at https://github.com/Cobalt-27/CompactFusion