🤖 AI Summary
Deploying large language models (LLMs) in resource-constrained environments remains challenging due to their computational and memory demands, and existing pruning methods often discard critical knowledge embedded in the removed parameters. To address this, we propose Manifold-Based Knowledge Alignment and Layer Merging Compression (MKA), a layer-merging compression method grounded in manifold learning and the Information Bottleneck (IB) principle. MKA quantifies inter-layer semantic similarity via manifold alignment, explicitly identifies functionally redundant layers, and merges them with knowledge-preserving aggregation, enabling knowledge transfer rather than parameter elimination. Notably, MKA is the first method to incorporate manifold alignment into hierarchical knowledge distillation-based layer merging, departing fundamentally from conventional pruning paradigms. Evaluated on Llama3-8B, MKA achieves a 43.75% model size reduction with only a 2.82% drop in MMLU accuracy, substantially outperforming pruning baselines. Moreover, MKA combines effectively with quantization, further improving compression efficiency.
📝 Abstract
While large language models (LLMs) excel in many domains, their complexity and scale challenge deployment in resource-limited environments. Current compression techniques, such as parameter pruning, often fail to effectively utilize the knowledge encoded in the pruned parameters. To address these challenges, we propose Manifold-Based Knowledge Alignment and Layer Merging Compression (MKA), a novel approach that uses manifold learning and the Information Bottleneck (IB) measure to merge similar layers, reducing model size while preserving essential performance. We evaluate MKA on multiple benchmark datasets and various LLMs. Our findings show that MKA not only preserves model performance but also achieves substantial compression ratios, outperforming traditional pruning methods. Moreover, when coupled with quantization, MKA delivers even greater compression. On the MMLU dataset with the Llama3-8B model, MKA achieves a compression ratio of 43.75% with a performance decrease of only 2.82%. MKA thus offers a resource-efficient, performance-preserving compression technique for LLMs. We make our code available at https://github.com/SempraETY/Pruning-via-Merging
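The pipeline the abstract describes, scoring inter-layer similarity and then merging the most redundant pair of layers, can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: linear CKA on layer activations stands in for the manifold-alignment/IB similarity measure, and a plain weight average stands in for the knowledge-aggregating merge, so all function names and choices here are illustrative assumptions.

```python
import numpy as np


def layer_similarity(acts_a, acts_b):
    """Linear CKA between two layers' activations (n_samples x d).

    Used here as a simple stand-in for the paper's manifold-alignment
    similarity score; identical activations give a score of 1.0.
    """
    a = acts_a - acts_a.mean(axis=0)
    b = acts_b - acts_b.mean(axis=0)
    num = np.linalg.norm(b.T @ a, ord="fro") ** 2
    den = np.linalg.norm(a.T @ a, ord="fro") * np.linalg.norm(b.T @ b, ord="fro")
    return num / den


def merge_most_similar(weights, activations):
    """One greedy compression step (hypothetical helper).

    Scores every adjacent layer pair, then replaces the most similar
    (i.e. most redundant) pair with an averaged weight matrix, shrinking
    the model by one layer while keeping both layers' parameters in play.
    """
    scores = [
        layer_similarity(activations[i], activations[i + 1])
        for i in range(len(weights) - 1)
    ]
    i = int(np.argmax(scores))
    merged = 0.5 * (weights[i] + weights[i + 1])  # aggregate, don't discard
    return weights[:i] + [merged] + weights[i + 2 :], i, scores[i]


# Toy model: 4 layers, where layers 1 and 2 are nearly identical in behavior.
rng = np.random.default_rng(0)
acts = [rng.standard_normal((32, 8)) for _ in range(4)]
acts[2] = acts[1] + 0.01 * rng.standard_normal((32, 8))  # redundant pair
weights = [rng.standard_normal((8, 8)) for _ in range(4)]

new_weights, idx, score = merge_most_similar(weights, acts)
print(len(new_weights), idx)  # 3 layers remain; the redundant pair was merged
```

Iterating this step until a target compression ratio is reached is one way to read the abstract's "merge similar layers" loop; the actual method additionally uses the IB measure to decide how much of each layer's knowledge to retain.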