🤖 AI Summary
To address activation staleness caused by communication latency in expert-parallel diffusion Mixture-of-Experts (MoE) inference, this paper proposes DICE, a staleness-centric optimization framework. DICE integrates three synergistic, training-free strategies: (1) interleaved pipelined scheduling, (2) layer-granularity selective synchronization gating, and (3) token-importance-driven dynamic communication pruning. It is the first method to halve step-level staleness and to provide fine-grained token-level communication control. Evaluated on diffusion MoE models, DICE maintains near-identical FID and CLIP Score while delivering a 1.26× end-to-end inference speedup, significantly outperforming state-of-the-art displaced-parallelism approaches. The implementation is publicly available.
📝 Abstract
Mixture-of-Experts-based (MoE-based) diffusion models demonstrate remarkable scalability in high-fidelity image generation, yet their reliance on expert parallelism introduces critical communication bottlenecks. State-of-the-art methods alleviate this overhead in parallel diffusion inference through computation-communication overlapping, termed displaced parallelism. However, we identify that these techniques induce severe *staleness*: the use of outdated activations from previous timesteps, which significantly degrades quality, especially in expert-parallel scenarios. We tackle this fundamental tension and propose DICE, a staleness-centric optimization framework with a three-fold approach: (1) Interweaved Parallelism introduces staggered pipelines, effectively halving step-level staleness at no extra cost; (2) Selective Synchronization operates at layer granularity and protects layers vulnerable to stale activations; and (3) Conditional Communication, a token-level, training-free method that dynamically adjusts communication frequency based on token importance. Together, these strategies effectively reduce staleness, achieving a 1.26× speedup with minimal quality degradation. Empirical results establish DICE as an effective and scalable solution. Our code is publicly available at https://anonymous.4open.science/r/DICE-FF04
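The token-level idea behind Conditional Communication can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name, the norm-free importance input, and the `keep_ratio` parameter are all assumptions. Only the most important tokens receive freshly communicated activations; the rest reuse cached (stale) activations from the previous timestep, cutting communication volume.

```python
import numpy as np

def conditional_communication(fresh, cached, importance, keep_ratio=0.5):
    """Illustrative token-level communication pruning.

    fresh:      activations that would require communication this step
    cached:     stale activations retained from the previous timestep
    importance: per-token importance scores (higher = more important)
    keep_ratio: fraction of tokens that get fresh activations
    """
    num_tokens = fresh.shape[0]
    k = max(1, int(num_tokens * keep_ratio))
    # Indices of the k most important tokens.
    fresh_idx = np.argsort(importance)[-k:]
    out = cached.copy()              # default: reuse stale activations
    out[fresh_idx] = fresh[fresh_idx]  # communicate only important tokens
    return out, fresh_idx
```

Under this sketch, lowering `keep_ratio` trades quality for communication savings, which mirrors the frequency-adjustment knob described in the abstract.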