🤖 AI Summary
To address the excessive memory overhead of deploying large multimodal Mixture-of-Experts (MoE) models on edge devices, this paper proposes MoTE, a sparsely activated MoE architecture with ternary experts. The method rests on three key ideas: (1) routed experts whose weights are constrained to {-1, 0, 1}, trading a few high-precision experts for many low-precision ones; (2) reuse of the pretrained feed-forward network (FFN) as a shared expert, enabling parameter-efficient up-cycling from a dense checkpoint; and (3) integration of sparse routing, expert sharing, and post-training quantization for further memory compression. Under a strict 3.4 GB expert memory budget, the approach achieves a 4.3% average accuracy gain over full-precision MoE-LLaVA across multimodal tasks while substantially reducing GPU memory footprint, demonstrating a favorable trade-off between inference accuracy and resource efficiency.
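To make the ternary-weight idea concrete, the sketch below shows one common way to project a full-precision weight matrix onto {-1, 0, 1}. The abstract does not state MoTE's exact quantization function, so the absmean scaling used here (popularized by BitNet-b1.58-style ternary models) is an illustrative assumption, not the paper's method.

```python
import torch

def ternarize(w: torch.Tensor, eps: float = 1e-5) -> tuple[torch.Tensor, torch.Tensor]:
    """Project a full-precision weight matrix onto {-1, 0, 1}.

    Absmean scaling (an assumed scheme, not confirmed by the paper):
    divide by the mean absolute value, round, and clip. Returns the
    ternary matrix and the per-tensor scale, so that w is approximately
    scale * w_ternary.
    """
    scale = w.abs().mean().clamp(min=eps)
    w_ternary = (w / scale).round().clamp(-1, 1)
    return w_ternary, scale

w = torch.randn(4096, 4096)
w_t, s = ternarize(w)
print(w_t.unique())  # tensor([-1., 0., 1.]) -- each weight costs ~1.58 bits
```

Because every stored weight takes one of three values, the routed experts can be packed far more densely than FP16 experts, which is what lets MoTE afford a larger expert count under the same memory budget.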
📝 Abstract
Large multimodal Mixture-of-Experts (MoE) models effectively scale model size to boost performance while keeping the number of active parameters fixed. However, previous works primarily use full-precision experts during sparse up-cycling. Although these models show superior performance on end tasks, the large number of experts introduces a higher memory footprint, which poses significant challenges for deployment on edge devices. In this work, we propose MoTE, a scalable and memory-efficient approach to training Mixture-of-Ternary-Experts models from a dense checkpoint. Instead of training fewer high-precision experts, we propose to train more low-precision experts during up-cycling. Specifically, we use the pre-trained FFN as a shared expert and train ternary routed experts with parameters in {-1, 0, 1}. Extensive experiments show that our approach exhibits a promising scaling trend with model size. MoTE achieves performance comparable to the full-precision baseline MoE-LLaVA while offering a lower memory footprint. Furthermore, our approach is compatible with post-training quantization methods, and its advantage is further amplified as the memory constraint tightens. Under the same expert memory footprint of 3.4 GB and combined with post-training quantization, MoTE outperforms MoE-LLaVA by 4.3% average accuracy on end tasks, demonstrating its effectiveness and potential for memory-constrained devices.
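As a rough picture of how the pieces fit together, here is a minimal PyTorch sketch of a layer combining a shared full-precision FFN (reused from the dense checkpoint) with top-k routed ternary experts. The class names `MoTELayer` and `TernaryLinear`, the expert count, the top-k value, and the routing details are all assumptions for illustration; the paper's actual architecture and training recipe may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TernaryLinear(nn.Module):
    """Linear layer whose weight is projected onto {-1, 0, 1} at forward
    time via absmean scaling (an assumed scheme; training through the
    rounding would also need a straight-through estimator, omitted here)."""
    def __init__(self, d_in: int, d_out: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(d_out, d_in) * d_in ** -0.5)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        scale = self.weight.abs().mean().clamp(min=1e-5)
        w_ternary = (self.weight / scale).round().clamp(-1, 1)
        return F.linear(x, w_ternary * scale)

class MoTELayer(nn.Module):
    """Shared full-precision FFN plus top-k routed ternary experts.
    Sizes, expert count, and top-k are illustrative, not the paper's
    configuration."""
    def __init__(self, d_model: int, d_ff: int, n_experts: int = 8, top_k: int = 2):
        super().__init__()
        # Shared expert: in MoTE this is the pretrained dense FFN, reused.
        self.shared_ffn = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
        # Routed experts: newly trained, with ternary weights.
        self.experts = nn.ModuleList([
            nn.Sequential(TernaryLinear(d_model, d_ff), nn.GELU(),
                          TernaryLinear(d_ff, d_model))
            for _ in range(n_experts)])
        self.router = nn.Linear(d_model, n_experts)
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.shared_ffn(x)                     # every token, full precision
        weights, idx = self.router(x).topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)         # renormalize over the top-k
        for k in range(self.top_k):                  # add each token's routed experts
            for e, expert in enumerate(self.experts):
                mask = idx[..., k] == e              # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[..., k][mask].unsqueeze(-1) * expert(x[mask])
        return out

x = torch.randn(2, 16, 512)                          # (batch, seq, d_model)
print(MoTELayer(512, 2048)(x).shape)                 # torch.Size([2, 16, 512])
```

The design point the sketch illustrates: the one shared expert keeps full-precision capacity for every token, while the per-token specialized capacity comes from many cheap ternary experts, so total expert memory stays small even as the expert count grows.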