TEAM: Temporal-Spatial Consistency Guided Expert Activation for MoE Diffusion Language Model Acceleration

πŸ“… 2026-02-09
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses the high computational overhead and latency in Mixture-of-Experts (MoE) diffusion language models, where numerous experts are activated during denoising despite only a small fraction of generated tokens being ultimately accepted. To tackle this inefficiency, the authors propose TEAM, a plug-in framework that exploits previously unrecognized spatiotemporal consistency in expert routingβ€”both across denoising timesteps (temporal) and token positions (spatial). TEAM integrates three synergistic strategies: spatiotemporal consistency-guided expert selection, conservative expert activation, and aggressive multi-candidate speculative decoding. This approach substantially reduces the number of activated experts while significantly improving the effective token acceptance rate. Evaluated across multiple benchmarks, TEAM achieves up to 2.2Γ— inference speedup with negligible degradation in generation quality.

πŸ“ Abstract
Diffusion large language models (dLLMs) have recently gained significant attention due to their inherent support for parallel decoding. Building on this paradigm, Mixture-of-Experts (MoE) dLLMs with autoregressive (AR) initialization have further demonstrated strong performance competitive with mainstream AR models. However, we identify a fundamental mismatch between MoE architectures and diffusion-based decoding. Specifically, a large number of experts are activated at each denoising step, while only a small subset of tokens is ultimately accepted, resulting in substantial inference overhead and limiting their deployment in latency-sensitive applications. In this work, we propose TEAM, a plug-and-play framework that accelerates MoE dLLMs by enabling more accepted tokens with fewer activated experts. TEAM is motivated by the observation that expert routing decisions exhibit strong temporal consistency across denoising levels as well as spatial consistency across token positions. Leveraging these properties, TEAM employs three complementary expert activation and decoding strategies, conservatively selecting necessary experts for decoded and masked tokens and simultaneously performing aggressive speculative exploration across multiple candidates. Experimental results demonstrate that TEAM achieves up to 2.2x speedup over vanilla MoE dLLM, with negligible performance degradation. Code is released at https://github.com/PKU-SEC-Lab/TEAM-MoE-dLLM.
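The core mechanism described above, reusing expert routing decisions when they stay consistent across adjacent denoising steps, can be illustrated with a toy sketch. This is not the authors' implementation; the function names, the overlap threshold, and the logit-drift model are illustrative assumptions only.

```python
import random

NUM_EXPERTS = 8   # experts in the MoE layer (toy value)
TOP_K = 2         # experts activated per token (toy value)

def route(logits, k=TOP_K):
    """Top-k expert selection from router logits."""
    return set(sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k])

def consistent_activation(prev_experts, logits, overlap_thresh=0.5):
    """Reuse the previous denoising step's expert set when fresh routing
    mostly agrees with it (temporal consistency); otherwise re-route."""
    fresh = route(logits)
    if prev_experts is None:
        return fresh
    overlap = len(fresh & prev_experts) / TOP_K
    return prev_experts if overlap >= overlap_thresh else fresh

# Toy simulation: router logits for one token position drift only slightly
# between adjacent denoising steps, so the activated set is usually reused.
random.seed(0)
base = [random.gauss(0, 1) for _ in range(NUM_EXPERTS)]
prev, reuses = None, 0
for step in range(10):
    logits = [b + random.gauss(0, 0.1) for b in base]  # small temporal drift
    experts = consistent_activation(prev, logits)
    reuses += int(experts == prev)
    prev = experts
```

In this sketch, a high reuse count is what makes caching routing decisions profitable: a reused expert set needs no fresh dispatch. TEAM's actual strategy additionally exploits spatial consistency across token positions and pairs conservative activation with aggressive multi-candidate speculative decoding.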
Problem

Research questions and friction points this paper is trying to address:

- Mixture-of-Experts
- diffusion language models
- expert activation
- inference overhead
- temporal-spatial consistency
Innovation

Methods, ideas, or system contributions that make the work stand out:

- Temporal-Spatial Consistency
- Expert Activation
- MoE Diffusion Language Model
- Speculative Decoding
- Parallel Decoding Acceleration