Rewiring Experts on the Fly: Continuous Rerouting for Better Online Adaptation in Mixture-of-Expert Models

📅 2025-10-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address suboptimal expert routing in Mixture-of-Experts (MoE) models under distribution shift at deployment, this paper proposes a training-free, data-free online adaptive routing framework. The method re-routes experts during autoregressive generation based on the already-produced sequence, optimizing router logits through periodic self-supervised updates; only lightweight learnable vectors are adjusted, which mitigates over-adaptation. This constitutes the first plug-and-play, reference-free online routing adaptation for MoE models. Evaluated on OLMoE, the method improves HumanEval scores by 5.5%; when combined with self-consistency decoding on DeepSeek-V2-Lite, it yields an average 6% gain, improving generation robustness and consistency under complex reasoning and distribution shift.
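
The core mechanism described above is a small additive vector on the router logits that is the only thing trained at test time. Below is a minimal PyTorch sketch of that idea; the class name `AdaptiveRouter`, the wrapper structure, and the top-k value are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class AdaptiveRouter(nn.Module):
    """Hypothetical wrapper around a frozen MoE router: adds a small
    learnable bias vector to the router logits before top-k selection.
    The bias vector is the only trainable parameter at test time."""

    def __init__(self, base_router: nn.Linear, num_experts: int, top_k: int = 2):
        super().__init__()
        self.base_router = base_router              # frozen pretrained router
        self.top_k = top_k
        # The only parameters updated online: one additive vector over expert logits.
        self.logit_bias = nn.Parameter(torch.zeros(num_experts))

    def forward(self, hidden: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        # Base routing decision is computed without gradients (router stays frozen).
        with torch.no_grad():
            logits = self.base_router(hidden)       # (num_tokens, num_experts)
        # Gradient flows only through the additive bias, shifting expert selection.
        logits = logits + self.logit_bias
        weights, experts = torch.topk(logits.softmax(dim=-1), self.top_k, dim=-1)
        return weights, experts
```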

📝 Abstract
Mixture-of-Experts (MoE) models achieve efficient scaling through sparse expert activation, but often suffer from suboptimal routing decisions due to distribution shifts at deployment. While existing test-time adaptation methods could potentially address these issues, they primarily focus on dense models and require access to external data, limiting their practical applicability to MoE architectures. However, we find that, instead of relying on reference data, we can optimize MoE expert selection on the fly based only on the input context. As such, we propose a data-free, online test-time framework that continuously adapts MoE routing decisions during text generation without external supervision or data. Our method cycles between two phases: during the prefill stage, and later at regular intervals, we optimize the routing decisions of the model using self-supervision based on the already generated sequence; we then generate text as normal, maintaining the modified router until the next adaptation. We implement this through lightweight additive vectors that update only the router logits in selected layers, maintaining computational efficiency while preventing over-adaptation. Experimental results show consistent performance gains on challenging reasoning tasks while maintaining robustness to context shifts. For example, our method achieves a 5.5% improvement on HumanEval with OLMoE. Furthermore, owing to its plug-and-play property, our method naturally complements existing test-time scaling techniques, e.g., achieving 6% average gains when combined with self-consistency on DeepSeek-V2-Lite.
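
The two-phase cycle in the abstract (periodic self-supervised router updates, then ordinary decoding) could look roughly like the sketch below, assuming a model whose routers carry `logit_bias` vectors as in the earlier sketch, a HuggingFace-style `model(...).logits` interface, and next-token cross-entropy on the generated prefix as the self-supervised objective. The function name, the interval `adapt_every`, the step counts, and greedy decoding are all illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def generate_with_online_rerouting(model, tokenizer, prompt,
                                   adapt_every=64, adapt_steps=4, lr=1e-2,
                                   max_new_tokens=512):
    """Hypothetical adapt-generate cycle: periodically tune only the
    router logit-bias vectors on the sequence produced so far."""
    # Freeze everything, then re-enable gradients for the bias vectors only.
    bias_params = [p for n, p in model.named_parameters() if "logit_bias" in n]
    for p in model.parameters():
        p.requires_grad_(False)
    for p in bias_params:
        p.requires_grad_(True)
    optimizer = torch.optim.Adam(bias_params, lr=lr)

    ids = tokenizer(prompt, return_tensors="pt").input_ids
    for step in range(max_new_tokens):
        # Phase 1: at the prefill stage and then at regular intervals,
        # minimize next-token cross-entropy on the context so far.
        if step % adapt_every == 0:
            for _ in range(adapt_steps):
                logits = model(ids).logits
                loss = F.cross_entropy(logits[0, :-1], ids[0, 1:])
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
        # Phase 2: ordinary autoregressive decoding with the adapted routers.
        with torch.no_grad():
            next_id = model(ids).logits[0, -1].argmax()
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
        if next_id.item() == tokenizer.eos_token_id:
            break
    return tokenizer.decode(ids[0])
```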
Problem

Research questions and friction points this paper is trying to address.

Optimizing MoE routing decisions dynamically during text generation
Addressing suboptimal expert selection due to distribution shifts
Enabling data-free online adaptation without external supervision
Innovation

Methods, ideas, or system contributions that make the work stand out.

Data-free online adaptation for MoE routing
Self-supervised routing optimization using generated sequence
Lightweight additive vectors update router logits efficiently