Diet Your LLM: Dimension-wise Global Pruning of LLMs via Merging Task-specific Importance Score

📅 2026-03-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Deploying large language models incurs significant computational overhead, yet existing structured pruning methods either lack task adaptability or incur prohibitive training costs. This work proposes DIET, the first training-free, task-aware, dimension-level global structured pruning approach. DIET constructs a unified pruning mask by fusing multi-task importance scores derived from activation magnitudes of merely 100 samples per task, combined with a majority voting mechanism. Evaluated on Gemma-2 (2B/9B) across seven zero-shot benchmarks, DIET achieves an average accuracy improvement of nearly 10% over the current state-of-the-art at 20% sparsity, effectively balancing model efficiency and performance without any fine-tuning.

📝 Abstract
Large language models (LLMs) have demonstrated remarkable capabilities, but their massive scale poses significant challenges for practical deployment. Structured pruning offers a promising solution by removing entire dimensions or layers, yet existing methods face critical trade-offs: task-agnostic approaches cannot adapt to task-specific requirements, while task-aware methods require costly training to learn task adaptability. We propose DIET (Dimension-wise global pruning of LLMs via merging Task-wise importance scores), a training-free structured pruning method that combines dimension-level granularity with task-aware selection. DIET profiles activation magnitudes across tasks using only 100 samples per task, then applies majority voting to construct a single global mask, requiring neither costly pre-computation nor training. Experiments on seven zero-shot benchmarks using Gemma-2 2B and 9B models demonstrate the effectiveness of DIET; for example, at 20% sparsity on Gemma-2 2B, DIET achieves nearly 10% average accuracy improvement over previous state-of-the-art structured pruning methods. This advantage persists across various sparsity levels and model scales, positioning DIET as a practical and robust choice for structured LLM pruning.
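The masking procedure described in the abstract can be sketched as follows: score each hidden dimension per task by its mean activation magnitude over a small calibration set, let each task vote for its top dimensions, then keep the dimensions with the most votes as one global mask. This is a minimal illustrative sketch assuming per-task activation matrices are already collected; the function names, tie-breaking, and score definition are assumptions, not the paper's exact implementation.

```python
import numpy as np

def task_importance(activations: np.ndarray) -> np.ndarray:
    """Per-dimension importance for one task: mean absolute activation
    magnitude over its calibration samples (shape: [samples, hidden_dim])."""
    return np.abs(activations).mean(axis=0)

def global_mask(task_activations: list, sparsity: float) -> np.ndarray:
    """Fuse task-wise importance into one global keep-mask via majority voting."""
    hidden_dim = task_activations[0].shape[1]
    keep = int(round(hidden_dim * (1.0 - sparsity)))
    votes = np.zeros(hidden_dim, dtype=int)
    for acts in task_activations:
        scores = task_importance(acts)
        top = np.argsort(scores)[-keep:]   # this task's most important dims
        votes[top] += 1                    # each task casts one vote per dim
    # Keep the most-voted dimensions (ties broken by sort order here).
    winners = np.argsort(votes)[-keep:]
    mask = np.zeros(hidden_dim, dtype=bool)
    mask[winners] = True
    return mask

# Toy example: 7 tasks, 100 calibration samples each, 64 hidden dims.
rng = np.random.default_rng(0)
tasks = [rng.standard_normal((100, 64)) for _ in range(7)]
mask = global_mask(tasks, sparsity=0.2)
print(int(mask.sum()))  # 51 dimensions kept at 20% sparsity
```

Because a single mask is shared across all tasks, the pruned model stays a standard dense (smaller) model at inference time, which is what makes the approach training-free and deployment-friendly.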
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Structured Pruning
Task-specific Adaptation
Model Compression
Deployment Efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

structured pruning
task-aware
training-free
dimension-wise
LLM compression