🤖 AI Summary
This work addresses the memory bottleneck in deploying large-scale Mixture-of-Experts (MoE) language models, which stems from their enormous parameter counts. The authors propose REAM, a novel compression method that groups experts based on router weights and merges their parameters, departing from conventional pruning strategies. This approach achieves substantial model compression while better preserving the original performance. By varying the mix of calibration data across general, mathematical, and code domains, REAM navigates the trade-off between multiple-choice and generative tasks. Evaluated across multiple benchmarks, REAM outperforms baseline methods such as REAP and closely matches the performance of the original uncompressed model in most scenarios, tracing the Pareto frontier of the trade-off between multiple-choice and generative performance.
📝 Abstract
Mixture-of-Experts (MoE) large language models (LLMs) are among the top-performing architectures. The largest models, often with hundreds of billions of parameters, pose significant memory challenges for deployment. Traditional approaches to reducing memory requirements include weight pruning and quantization. Motivated by Router-weighted Expert Activation Pruning (REAP), which removes experts outright, we propose a novel method, Router-weighted Expert Activation Merging (REAM). Instead of removing experts, REAM groups them and merges their weights, better preserving the original model's performance. We evaluate REAM against REAP and other baselines across multiple MoE LLMs on diverse multiple-choice (MC) question-answering and generative (GEN) benchmarks. Our results reveal a trade-off between MC and GEN performance that depends on the mix of calibration data. By controlling the mix of general, math, and coding data, we examine the Pareto frontier of this trade-off and show that REAM often outperforms the baselines and is, in many cases, comparable to the original uncompressed models.
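The core idea, merging the experts in a group via a router-weighted combination rather than deleting them, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, shapes, and the use of mean router gates as merging coefficients are all assumptions for the sake of the example.

```python
import numpy as np

def merge_expert_group(expert_weights, router_gates):
    """Merge a group of expert weight matrices into a single expert.

    Hypothetical sketch: each expert contributes in proportion to the
    average router gate it received on calibration data.

    expert_weights: list of (d_out, d_in) arrays, one per expert in the group.
    router_gates:   per-expert mean router gate values from calibration data.
    """
    gates = np.asarray(router_gates, dtype=np.float64)
    coeffs = gates / gates.sum()            # normalize to a convex combination
    stacked = np.stack(expert_weights)      # (n_experts, d_out, d_in)
    # Weighted sum over the expert axis yields one merged (d_out, d_in) matrix.
    return np.einsum("e,eij->ij", coeffs, stacked)

# Toy usage: merge a group of 3 experts into one.
experts = [np.random.randn(4, 8) for _ in range(3)]
gates = [0.5, 0.3, 0.2]
merged = merge_expert_group(experts, gates)
print(merged.shape)  # (4, 8)
```

A merge of this form keeps information from every expert in the group, whereas pruning discards the low-saliency experts entirely, which is the intuition behind REAM's advantage over REAP described above.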