MoST: Mixing Speech and Text with Modality-Aware Mixture of Experts

📅 2026-01-15
📈 Citations: 1
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses a limitation of existing multimodal large language models that process speech and text with shared parameters, neglecting their inherent representational differences and impairing modality-specific learning. To overcome this, the authors propose MoST, a novel modality-aware mixture-of-experts (MAMoE) architecture that assigns dedicated expert pathways to speech and text while incorporating shared experts to facilitate cross-modal fusion. As the first fully open-source speech-text mixture-of-experts large language model, MoST combines modality-aware routing, post-training on open-source data, speech-text instruction tuning, and efficient alignment techniques. It achieves significant performance gains over comparable-scale models across diverse tasks, including automatic speech recognition, text-to-speech synthesis, audio language modeling, and spoken question answering. Ablation studies further confirm the effectiveness of both modality-specific routing and the shared expert mechanism.

๐Ÿ“ Abstract
We present MoST (Mixture of Speech and Text), a novel multimodal large language model that seamlessly integrates speech and text processing through our proposed Modality-Aware Mixture of Experts (MAMoE) architecture. While current multimodal models typically process diverse modality representations with identical parameters, disregarding their inherent representational differences, we introduce specialized routing pathways that direct tokens to modality-appropriate experts based on input type. MAMoE simultaneously enhances modality-specific learning and cross-modal understanding through two complementary components: modality-specific expert groups that capture domain-specific patterns, and shared experts that facilitate information transfer between modalities. Building on this architecture, we develop an efficient transformation pipeline that adapts a pretrained MoE language model through strategic post-training on ASR and TTS datasets, followed by fine-tuning with a carefully curated speech-text instruction dataset. A key feature of this pipeline is that it relies exclusively on fully accessible, open-source datasets to achieve strong performance and data efficiency. Comprehensive evaluations across ASR, TTS, audio language modeling, and spoken question answering benchmarks show that MoST consistently outperforms existing models of comparable parameter counts. Our ablation studies confirm that the modality-specific routing mechanism and the shared experts design contribute significantly to performance gains across all tested domains. To our knowledge, MoST is the first fully open-source speech-text LLM built on a Mixture of Experts architecture. \footnote{We release the MoST model, training code, inference code, and training data at https://github.com/NUS-HPC-AI-Lab/MoST}
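The core MAMoE idea described above (modality-specific expert groups plus an always-active shared expert, with routing restricted to each token's own modality group) can be sketched as follows. This is a minimal illustrative NumPy sketch, not the authors' implementation: the expert count, top-1 routing, single-linear-map experts, and all function names here are assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8           # hidden size (illustrative)
N_EXPERTS = 2   # experts per modality group (illustrative)

def make_expert():
    # Each "expert" is a single linear map, standing in for an FFN block.
    w = rng.standard_normal((D, D)) / np.sqrt(D)
    return lambda x: x @ w

speech_experts = [make_expert() for _ in range(N_EXPERTS)]
text_experts = [make_expert() for _ in range(N_EXPERTS)]
shared_expert = make_expert()          # applied to every token (cross-modal fusion)
router_w = rng.standard_normal((D, N_EXPERTS))

def mamoe_layer(tokens, modality):
    """tokens: (T, D) array; modality: (T,) array of 'speech' / 'text'.

    Each token is routed (top-1 here, for simplicity) only within its
    own modality's expert group; the shared expert's output is always
    added so information can flow between modalities."""
    out = np.zeros_like(tokens)
    for t, (x, m) in enumerate(zip(tokens, modality)):
        group = speech_experts if m == "speech" else text_experts
        logits = x @ router_w                       # router scores over the group
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()                        # softmax gate
        k = int(np.argmax(probs))                   # top-1 expert in the group
        out[t] = probs[k] * group[k](x) + shared_expert(x)
    return out

tokens = rng.standard_normal((4, D))
modality = np.array(["speech", "speech", "text", "text"])
y = mamoe_layer(tokens, modality)
```

Because the router only ever selects among experts of the token's own modality, speech and text tokens with identical features still take different expert pathways, which is the property the ablations in the paper attribute the gains to.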
Problem

Research questions and friction points this paper is trying to address.

multimodal large language model
speech-text integration
modality-specific representation
Mixture of Experts
cross-modal understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixture of Experts
Multimodal LLM
Modality-Aware Routing
Speech-Text Integration
Open-Source Training