LIBMoE: A Library for comprehensive benchmarking Mixture of Experts in Large Language Models

📅 2024-11-01
🏛️ arXiv.org
📈 Citations: 4
Influential: 0
🤖 AI Summary
Mixture-of-Experts (MoE) architectures have become a key scaling paradigm for large language models (e.g., DeepSeek-V3, Llama-4), yet their prohibitive training and evaluation costs hinder systematic research. Method: We introduce LibMoE, the first open-source benchmarking library for MoE in LLMs, designed modularly, optimized for efficiency, and supporting comprehensive evaluation. Built on PyTorch, it implements Top-K routing, gradient sparsity, distributed optimization, and multi-dimensional metrics (e.g., accuracy, throughput, load balancing). Contribution/Results: LibMoE enables the first unified zero-shot benchmarking of five state-of-the-art MoE algorithms across three LLM families and eleven datasets. Our evaluation reveals convergent cross-task performance among leading methods. By significantly lowering entry barriers, LibMoE enhances reproducibility, extensibility, and standardization in MoE research.
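The Top-K routing mentioned above can be illustrated with a minimal, library-free sketch: each token's router logits are ranked, the k highest-scoring experts are kept, and a softmax over only those logits yields the gate weights. The function name and example values below are hypothetical, not LibMoE's actual API.

```python
import math

def top_k_route(logits, k=2):
    """Sketch of Top-K routing: select the k highest-scoring experts
    for a token and renormalize their gate weights with a softmax
    restricted to the selected logits."""
    # Indices of the k largest router logits
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    # Softmax over the selected logits only
    exps = {i: math.exp(logits[i]) for i in top}
    total = sum(exps.values())
    return {i: exps[i] / total for i in top}

# Hypothetical router scores for 4 experts; experts 1 and 3 are selected,
# and their gate weights sum to 1.
weights = top_k_route([1.0, 3.0, 0.5, 2.0], k=2)
```

In practice this per-token selection is vectorized (e.g., with `torch.topk`), and only the selected experts run their forward pass, which is what makes MoE layers sparse and cheap relative to their parameter count.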

📝 Abstract
Mixture of Experts (MoE) plays an important role in the development of more efficient and effective large language models (LLMs). Due to the enormous resource requirements, studying large-scale MoE algorithms remains inaccessible to many researchers. This work develops LibMoE, a comprehensive and modular framework to streamline the research, training, and evaluation of MoE algorithms. Built upon three core principles: (i) modular design, (ii) efficient training, and (iii) comprehensive evaluation, LibMoE makes MoE in LLMs more accessible to a wide range of researchers by standardizing the training and evaluation pipelines. Using LibMoE, we extensively benchmarked five state-of-the-art MoE algorithms over three different LLMs and 11 datasets under the zero-shot setting. The results show that despite their unique characteristics, all MoE algorithms perform roughly similarly when averaged across a wide range of tasks. With its modular design and extensive evaluation, we believe LibMoE will be invaluable for researchers making meaningful progress towards the next generation of MoE and LLMs. Project page: https://fsoft-aic.github.io/fsoft-LibMoE.github.io
Problem

Research questions and friction points this paper is trying to address.

Systematic MoE research is limited by high computational costs
LibMoE enables reproducible, efficient, and extensible MoE benchmarking
Analyzes routing dynamics, initialization effects, and training regimes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified framework for reproducible and efficient MoE research
Transparent analytical tools for probing routing dynamics
Comprehensive analysis of routing patterns and training regimes
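One routing-dynamics metric the summary names explicitly is load balancing: how evenly the router spreads tokens across experts. A simple, hedged way to quantify it (a hypothetical helper, not LibMoE's actual metric implementation) is the ratio of the busiest expert's load to the ideal uniform load, where 1.0 means perfectly balanced:

```python
from collections import Counter

def expert_load_imbalance(assignments, num_experts):
    """Sketch of a load-balancing metric: ratio of the maximum
    per-expert token count to the ideal uniform count.
    Returns 1.0 for a perfectly balanced routing."""
    counts = Counter(assignments)
    ideal = len(assignments) / num_experts
    return max(counts.get(e, 0) for e in range(num_experts)) / ideal

# 8 tokens routed uniformly over 4 experts -> perfectly balanced
balanced = expert_load_imbalance([0, 1, 2, 3, 0, 1, 2, 3], num_experts=4)

# All tokens collapse onto expert 0 -> maximally imbalanced
collapsed = expert_load_imbalance([0] * 8, num_experts=4)
```

Tracking such a ratio over training steps is one concrete way to probe whether a routing algorithm collapses onto a few experts or keeps utilization even.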
Nam V. Nguyen
FPT Software AI Center, Viet Nam
Thong T. Doan
AI Resident at FSoft AI Center
Luong Tran
FPT Software AI Center, Viet Nam
Van Nguyen
FPT Software AI Center, Viet Nam
Quang Pham
Salesforce AI Research