Effective MoE-based LLM Compression by Exploiting Heterogeneous Inter-Group Experts Routing Frequency and Information Density

📅 2026-02-10
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the high memory overhead of Mixture-of-Experts (MoE) large language models, which stems from storing numerous expert networks and hinders practical deployment. The authors propose RFID-MoE, a novel framework that, for the first time, jointly leverages expert activation frequency and effective rank to construct an importance metric. This enables non-uniform SVD-based rank allocation guided by information density, followed by a sparse projection mechanism to efficiently reconstruct compression residuals and recover critical information. Evaluated on Qwen3-30B, the method achieves a 60% compression rate while reducing PTB perplexity to 16.92 (more than 8.0 points lower than the baseline) and improves zero-shot accuracy on HellaSwag by approximately 8%, substantially outperforming existing compression approaches.
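The residual-recovery idea in the summary, keeping a low-rank SVD factorization and then approximating what the truncation lost with a small number of extra parameters, can be illustrated with a minimal low-rank-plus-sparse sketch. The paper's actual sparse projection mechanism is not specified here; this stand-in simply retains the k largest-magnitude entries of the residual, so the function name `lowrank_plus_sparse` and the top-k rule are assumptions for illustration only.

```python
import numpy as np

def lowrank_plus_sparse(W, r, k):
    # Truncated SVD gives the low-rank part A @ B; the residual is then
    # approximated by its k largest-magnitude entries (a simple stand-in
    # for the paper's sparse projection, whose exact form differs).
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A, B = U[:, :r] * s[:r], Vt[:r]
    R = W - A @ B
    idx = np.argsort(np.abs(R), axis=None)[-k:]  # flat indices of top-k entries
    S = np.zeros_like(R)
    S.flat[idx] = R.flat[idx]
    return A, B, S

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))          # toy stand-in for an expert weight
A, B, S = lowrank_plus_sparse(W, r=16, k=256)
err_lowrank = np.linalg.norm(W - A @ B)
err_with_sparse = np.linalg.norm(W - (A @ B + S))
```

Because the sparse term stores only k values plus their indices, the reconstruction improves at a parameter cost far below keeping the full residual, which matches the "minimal parameter overhead" framing in the abstract.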

๐Ÿ“ Abstract
Mixture-of-Experts (MoE) based Large Language Models (LLMs) have achieved superior performance, yet the massive memory overhead caused by storing multiple expert networks severely hinders their practical deployment. Singular Value Decomposition (SVD)-based compression has emerged as a promising post-training technique; however, most existing methods apply uniform rank allocation or rely solely on static weight properties. This overlooks the substantial heterogeneity in expert utilization observed in MoE models, where frequent routing patterns and intrinsic information density vary significantly across experts. In this work, we propose RFID-MoE, an effective framework for MoE compression by exploiting heterogeneous Routing Frequency and Information Density. We first introduce a fused metric that combines expert activation frequency with effective rank to measure expert importance, adaptively allocating higher ranks to critical expert groups under a fixed budget. Moreover, instead of discarding compression residuals, we reconstruct them via a parameter-efficient sparse projection mechanism to recover lost information with minimal parameter overhead. Extensive experiments on representative MoE LLMs (e.g., Qwen3, DeepSeekMoE) across multiple compression ratios demonstrate that RFID-MoE consistently outperforms state-of-the-art methods like MoBE and D2-MoE. Notably, RFID-MoE achieves a perplexity of 16.92 on PTB with the Qwen3-30B model at a 60% compression ratio, reducing perplexity by over 8.0 compared to baselines, and improves zero-shot accuracy on HellaSwag by approximately 8%.
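The core allocation step described in the abstract, fusing routing frequency with effective rank into an importance score and then splitting a fixed rank budget non-uniformly across experts, can be sketched as follows. The entropy-based effective rank and the simple product fusion used in `allocate_ranks` are assumptions for illustration; the paper's exact metric and budgeting rule may differ.

```python
import numpy as np

def effective_rank(W):
    # Effective rank via the entropy of the normalized singular-value
    # distribution: exp(H(p)), one common definition of information density.
    s = np.linalg.svd(W, compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]
    return float(np.exp(-(p * np.log(p)).sum()))

def allocate_ranks(expert_weights, routing_freqs, total_rank_budget):
    # Hypothetical fusion: importance = routing frequency x effective rank,
    # with the rank budget divided proportionally to importance.
    scores = np.array([f * effective_rank(W)
                       for W, f in zip(expert_weights, routing_freqs)])
    shares = scores / scores.sum()
    return np.maximum(1, np.round(shares * total_rank_budget).astype(int))

def svd_compress(W, r):
    # Keep the top-r singular triplets: W ~ A @ B with A (m x r), B (r x n).
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :r] * s[:r], Vt[:r]

rng = np.random.default_rng(0)
experts = [rng.standard_normal((64, 64)) for _ in range(4)]  # toy expert weights
freqs = [0.4, 0.3, 0.2, 0.1]                                 # routing frequencies
ranks = allocate_ranks(experts, freqs, total_rank_budget=80)
factors = [svd_compress(W, r) for W, r in zip(experts, ranks)]
```

Under this sketch, frequently routed experts with high effective rank receive more of the budget, while rarely used, low-density experts are compressed more aggressively, which is the heterogeneity the abstract argues uniform rank allocation overlooks.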
Problem

Research questions and friction points this paper is trying to address.

Mixture-of-Experts
LLM compression
memory overhead
expert heterogeneity
routing frequency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixture-of-Experts
SVD compression
routing frequency
information density
residual reconstruction