StreamFusion: Scalable Sequence Parallelism for Distributed Inference of Diffusion Transformers on GPUs

📅 2026-01-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high latency and substantial activation memory overhead of single-GPU inference with Diffusion Transformers (DiTs) in high-resolution image and long-video generation, where existing sequence parallelism approaches suffer from inefficient communication patterns, all-to-all inter-node bottlenecks, and excessive synchronization costs. To overcome these limitations, we propose StreamFusion, an efficient DiT inference engine tailored for modern GPU cluster topologies. StreamFusion introduces topology-aware sequence parallelism, a Torus Attention mechanism that enables computation-communication overlap, and a low-synchronization implementation based on one-sided communication. Experimental results demonstrate that StreamFusion achieves an average speedup of 1.35× over state-of-the-art methods, with peak improvements reaching 1.77×, significantly enhancing distributed inference efficiency.

📝 Abstract
Diffusion Transformers (DiTs) have gained increasing adoption in high-quality image and video generation. As demand for higher-resolution images and longer videos increases, single-GPU inference becomes inefficient due to increased latency and large activation sizes. Current frameworks employ sequence parallelism (SP) techniques such as Ulysses Attention and Ring Attention to scale inference. However, these implementations have three primary limitations: (1) suboptimal communication patterns for network topologies on modern GPU machines, (2) latency bottlenecks from all-to-all operations in inter-machine communication, and (3) GPU sender-receiver synchronization and computation overheads from using two-sided communication libraries. To address these issues, we present StreamFusion, a topology-aware efficient DiT serving engine. StreamFusion incorporates three key innovations: (1) a topology-aware sequence parallelism technique that accounts for inter- and intra-machine bandwidth differences, (2) Torus Attention, a novel SP technique enabling overlapping of inter-machine all-to-all operations with computation, and (3) a one-sided communication implementation that minimizes GPU sender-receiver synchronization and computation overheads. Our experiments demonstrate that StreamFusion outperforms the state-of-the-art approach by an average of $1.35\times$ (up to $1.77\times$).
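The abstract contrasts the paper's techniques with Ulysses-style sequence parallelism, whose core step is an all-to-all that swaps which axis is sharded: before the collective, each rank holds a slice of the tokens for all attention heads; afterwards, each rank holds all tokens for a slice of the heads, so attention can run locally. The sketch below simulates that layout swap on a single process with NumPy transposes (shapes and names are illustrative, not from the paper; a real deployment would use an actual all-to-all collective such as NCCL's):

```python
import numpy as np

# Single-process simulation of the Ulysses-style all-to-all layout swap.
# P ranks, S tokens, H heads, D head dim; before: each rank has S/P tokens
# of all H heads; after: each rank has all S tokens of H/P heads.
P, S, H, D = 4, 8, 8, 16
x = np.random.rand(P, S // P, H, D)  # per-rank shards: (rank, local tokens, heads, dim)

# "all-to-all": split the head axis into P groups, send group p to rank p,
# and concatenate the incoming token slices along the sequence axis
shards = x.reshape(P, S // P, P, H // P, D)  # (rank, local tok, head group, head, dim)
after = shards.transpose(2, 0, 1, 3, 4)      # head group index becomes the rank index
after = after.reshape(P, S, H // P, D)       # each rank: full sequence, H/P heads

# every rank now sees the whole sequence for its head subset
assert after.shape == (P, S, H // P, D)
# rank 0, head 0 holds the tokens originally sharded across ranks 0 and 1:
assert np.allclose(after[0, : S // P, 0], x[0, :, 0])
assert np.allclose(after[0, S // P : 2 * S // P, 0], x[1, :, 0])
```

This is the communication pattern whose inter-machine cost the paper targets: on a multi-node cluster the all-to-all crosses the slow inter-machine links, which motivates the topology-aware placement and the overlap of communication with computation described above.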
Problem

Research questions and friction points this paper is trying to address.

Diffusion Transformers
sequence parallelism
distributed inference
GPU communication
all-to-all operations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sequence Parallelism
Torus Attention
One-sided Communication
Topology-aware
Diffusion Transformers