🤖 AI Summary
To address the inference overhead caused by width redundancy in large language models (LLMs), this paper proposes a neuron merging framework grounded in discrete optimal transport (OT). By incorporating entropic regularization and matrix factorization, the method enables structured neuron re-projection and weight redistribution within Transformer layers, compressing model width while preserving salient semantic signal. Unlike conventional pruning approaches, which discard neurons outright based on importance measures and can incur sharp performance degradation, this method re-projects the full neuron width through dense, differentiable transport maps, producing compressed layers that deploy efficiently on standard hardware. Experiments across major LLM families (e.g., Llama, OPT, BLOOM) and scales (1B–7B) show that the approach consistently outperforms diverse width-pruning baselines, sustaining over 98% of original task performance while reducing inference FLOPs and memory bandwidth requirements by 20–35%.
📝 Abstract
Model compression offers a promising path to reducing the cost and inaccessibility of large pre-trained models, without significantly compromising their impressive performance. Large Transformer models, including large language models (LLMs), often contain computational redundancy, which can serve as a target for new model compression methods. In this work, we specifically target neuron-level redundancies in model layers by combining groups of similar neurons into fewer neurons. We frame this width reduction as a Discrete Optimal Transport problem, and propose DOTResize, a novel Transformer compression method that uses optimal transport theory to transform and compress model weights. To ensure applicability within the Transformer architecture, we motivate and incorporate entropic regularization and matrix factorization into the transportation maps produced by our method. Unlike pruning-based approaches, which discard neurons based on importance measures, DOTResize re-projects the entire neuron width, allowing useful signal to be retained and redistributed across the reduced layer. Empirical results show that DOTResize can outperform both simple and state-of-the-art neuron width-pruning techniques across multiple LLM families and sizes, while achieving measurable reductions in real-world computational cost.
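The core pipeline the abstract describes (solve an entropic-regularized OT problem to obtain a transport map between original and merged neurons, then re-project the weights through that map) can be sketched with a plain Sinkhorn solver. This is a minimal illustration, not the paper's implementation: the cost matrix, the choice of target neurons, and all hyperparameters below are assumptions made for the sake of a runnable example.

```python
import numpy as np

def sinkhorn(C, a, b, eps=0.05, iters=500):
    """Entropic-regularized OT (Sinkhorn): alternately rescale rows and
    columns of K = exp(-C/eps) until the plan's marginals match a and b."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)  # ending on the u-update makes row marginals exact
    return u[:, None] * K * v[None, :]  # transport plan, shape (d_in, d_out)

rng = np.random.default_rng(0)
d_in, d_out, d_model = 8, 4, 16       # toy sizes: merge 8 neurons into 4
W = rng.normal(size=(d_in, d_model))  # toy layer: one weight row per neuron

# Transport cost from each source neuron to each target "anchor" neuron.
# Taking the first d_out rows as anchors is purely illustrative; the paper's
# actual target construction and cost function are not reproduced here.
anchors = W[:d_out]
C = ((W[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)
C = C / C.max()  # normalize so eps sits on a sensible scale

a = np.full(d_in, 1.0 / d_in)    # uniform mass over source neurons
b = np.full(d_out, 1.0 / d_out)  # uniform mass over merged neurons
T = sinkhorn(C, a, b)            # soft neuron-to-neuron assignment

# Re-project: each merged neuron is a transport-weighted mix of the
# originals, so signal is redistributed rather than discarded.
W_merged = (T / T.sum(axis=0, keepdims=True)).T @ W
print(W_merged.shape)  # (4, 16)
```

The entropic term is what makes the plan dense and differentiable: with eps → 0 the plan degenerates toward a hard assignment (closer to pruning), while larger eps blends more source neurons into each merged one.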