Iterative Structured Pruning for Large Language Models with Multi-Domain Calibration

📅 2026-01-06
🏛️ arXiv.org
📈 Citations: 1 · Influential: 0
🤖 AI Summary
This work addresses the challenges of deploying large language models (LLMs), which suffer from high computational costs, substantial memory consumption, and significant inference latency due to their massive parameter counts. Whereas existing unstructured pruning methods produce irregular sparsity patterns that require specialized software or hardware support, this paper proposes a structured pruning framework tailored to LLMs. By leveraging multi-domain mixed calibration data and an iterative channel pruning strategy, the method accurately identifies and removes redundant channels, achieving high compression ratios and improved generalization while remaining compatible with standard hardware. Extensive experiments across diverse models and downstream tasks show that the approach compresses models efficiently with minimal performance degradation, confirming its effectiveness and broad applicability.
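The summary does not spell out how the multi-domain mixed calibration set is assembled, so the sketch below is only a minimal illustration under a common reading: draw an equal number of samples from several domain corpora and shuffle the mixture so no single domain dominates the pruning statistics. The domain names and example texts here are hypothetical, not taken from the paper.

```python
import random

# Hypothetical domain corpora; the paper mixes calibration data from
# multiple domains, but the exact sources are not listed on this page.
DOMAIN_CORPORA = {
    "web":  ["The quick brown fox jumps over the lazy dog."],
    "code": ["def add(a, b):\n    return a + b"],
    "math": ["The derivative of x**2 with respect to x is 2*x."],
    "wiki": ["Paris is the capital and most populous city of France."],
}

def build_calibration_set(n_samples: int, seed: int = 0) -> list[str]:
    """Sample uniformly across domains, then shuffle the mixture."""
    rng = random.Random(seed)
    per_domain = n_samples // len(DOMAIN_CORPORA)
    mixed = [rng.choice(texts)
             for texts in DOMAIN_CORPORA.values()
             for _ in range(per_domain)]
    rng.shuffle(mixed)
    return mixed
```

Balanced sampling is one natural interpretation of "multi-domain mixed"; the paper may instead weight domains by task relevance.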

📝 Abstract
Large Language Models (LLMs) have achieved remarkable success across a wide spectrum of natural language processing tasks. However, their ever-growing scale introduces significant barriers to real-world deployment, including substantial computational overhead, memory footprint, and inference latency. While model pruning presents a viable solution to these challenges, existing unstructured pruning techniques often yield irregular sparsity patterns that necessitate specialized hardware or software support. In this work, we explore structured pruning, which eliminates entire architectural components and maintains compatibility with standard hardware accelerators. We introduce a novel structured pruning framework that leverages a hybrid multi-domain calibration set and an iterative calibration strategy to effectively identify and remove redundant channels. Extensive experiments on various models across diverse downstream tasks show that our approach achieves significant compression with minimal performance degradation.
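The abstract names an iterative calibration strategy for identifying redundant channels but not the scoring rule, so the following is a rough single-layer PyTorch sketch under stated assumptions: channels are scored with a Wanda-style activation-weighted magnitude proxy (an assumption, not necessarily the paper's criterion), a small fraction is removed per step on a geometric schedule, and scores are recomputed on the calibration inputs after each removal. All function names are hypothetical.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def channel_scores(layer: nn.Linear, calib_inputs: torch.Tensor) -> torch.Tensor:
    # Assumed proxy: weight magnitude scaled by per-feature activation norms
    # from the calibration set, summed to one score per output channel.
    act_norm = calib_inputs.norm(dim=0)                # (in_features,)
    return (layer.weight.abs() * act_norm).sum(dim=1)  # (out_features,)

@torch.no_grad()
def prune_channels(layer: nn.Linear, keep: torch.Tensor) -> nn.Linear:
    # Rebuild the layer with only the selected output channels.
    new = nn.Linear(layer.in_features, keep.numel(), bias=layer.bias is not None)
    new.weight.copy_(layer.weight[keep])
    if layer.bias is not None:
        new.bias.copy_(layer.bias[keep])
    return new

@torch.no_grad()
def iterative_prune(layer: nn.Linear, calib_inputs: torch.Tensor,
                    target_keep: float = 0.5, steps: int = 4) -> nn.Linear:
    # Prune gradually: keep target_keep**(1/steps) of the channels at each
    # step and rescore on the calibration data, rather than pruning one-shot.
    ratio = target_keep ** (1.0 / steps)
    for _ in range(steps):
        scores = channel_scores(layer, calib_inputs)
        n_keep = max(1, int(layer.out_features * ratio))
        keep = scores.topk(n_keep).indices.sort().values
        layer = prune_channels(layer, keep)
    return layer
```

For example, `iterative_prune(nn.Linear(512, 512), torch.randn(128, 512))` shrinks the layer to roughly half its output channels. In a full model, removing a layer's output channels also requires dropping the matching input channels of the downstream layer, which this single-layer toy omits.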
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Model Pruning
Structured Pruning
Deployment Efficiency
Computational Overhead
Innovation

Methods, ideas, or system contributions that make the work stand out.

Structured Pruning
Multi-Domain Calibration
Iterative Calibration
Large Language Models
Model Compression