Omni-Router: Sharing Routing Decisions in Sparse Mixture-of-Experts for Speech Recognition

πŸ“… 2025-07-08
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
In speech recognition, sparse Mixture-of-Experts (MoE) models suffer from insufficient expert specialization because each layer's router makes its decisions independently, with no inter-layer coordination. To address this, we propose a **cross-layer shared router mechanism**: a single lightweight router reused across all Transformer MoE layers, enforcing routing consistency across depth and encouraging a structured division of labor and collaboration among experts. The model is trained on large-scale pseudo-labeled data. On ten mainstream ASR benchmarks, it achieves an average 11.2% relative WER reduction over dense Transformers and an 8.2% reduction over Switch Transformers, with faster convergence and lower training loss. The design improves robustness and parameter efficiency, establishing a scalable paradigm for inter-layer coordination in speech MoE architectures.

πŸ“ Abstract
Mixture-of-experts (MoE) architectures have expanded from language modeling to automatic speech recognition (ASR). Traditional MoE methods, such as the Switch Transformer, route tokens to experts independently within each layer. Our analysis reveals that the expert choices made by routers in most layers are not strongly correlated with those of routers in other layers. To increase cooperation between experts in different layers and encourage greater specialization, we use a shared router across different MoE layers. We call this model the *Omni-router Transformer*. Extensive experiments on a large-scale pseudo-labeled dataset and evaluations across 10 diverse, out-of-domain ASR benchmarks demonstrate that the Omni-router Transformer achieves lower training loss and consistently outperforms dense and Switch Transformer models, reducing average word error rates by 11.2% and 8.2%, respectively, while providing structured expert usage and improved robustness to diverse data.
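The core idea, replacing each layer's independent router with one router module whose parameters are shared by every MoE layer, can be illustrated with a minimal PyTorch sketch. This is an assumption-laden illustration, not the paper's implementation: the class and variable names (`SharedRouterMoE`, `shared_router`) are hypothetical, and top-1 (Switch-style) routing is assumed.

```python
import torch
import torch.nn as nn

class SharedRouterMoE(nn.Module):
    """One sparse MoE feed-forward block. The router is *passed in*,
    so the same router parameters can be reused by every layer
    (the shared-router idea), instead of each layer owning its own."""

    def __init__(self, router: nn.Linear, d_model: int, d_ff: int, num_experts: int):
        super().__init__()
        self.router = router  # shared across layers; NOT created here
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, d_model); top-1 routing as in Switch Transformer
        probs = self.router(x).softmax(dim=-1)        # (num_tokens, num_experts)
        top_p, top_e = probs.max(dim=-1)              # chosen prob and expert index
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = top_e == e
            if mask.any():
                # scale by the router probability so the router receives gradients
                out[mask] = top_p[mask, None] * expert(x[mask])
        return out

# One router instance shared by all three MoE layers:
num_experts, d_model = 4, 16
shared_router = nn.Linear(d_model, num_experts)
layers = [SharedRouterMoE(shared_router, d_model, d_ff=32, num_experts=num_experts)
          for _ in range(3)]

x = torch.randn(8, d_model)
for layer in layers:
    x = x + layer(x)  # residual connection around each MoE block
```

Because every layer consults the same router weights, a token that is sent to expert *e* at one depth tends to be sent to the corresponding expert at other depths, which is the cross-layer consistency the paper attributes its gains to; a per-layer-router baseline would instead construct a fresh `nn.Linear` inside each block.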
Problem

Research questions and friction points this paper is trying to address.

Improving expert cooperation in Mixture-of-Experts for ASR
Reducing word error rates in speech recognition models
Enhancing robustness and specialization in sparse MoE architectures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Shared router across MoE layers
Enhances expert cooperation and specialization
Reduces average WER by 11.2% vs. dense and 8.2% vs. Switch Transformer models