🤖 AI Summary
Deploying large language models incurs significant computational overhead, yet existing structured pruning methods either lack task adaptability or carry prohibitive training costs. This work proposes DIET, the first training-free, task-aware, dimension-level global structured pruning approach. DIET constructs a unified pruning mask by fusing multi-task importance scores, derived from activation magnitudes on just 100 samples per task, through a majority-voting mechanism. Evaluated on Gemma-2 (2B/9B) across seven zero-shot benchmarks, DIET achieves an average accuracy improvement of nearly 10% over the current state-of-the-art at 20% sparsity, effectively balancing model efficiency and performance without any fine-tuning.
📝 Abstract
Large language models (LLMs) have demonstrated remarkable capabilities, but their massive scale poses significant challenges for practical deployment. Structured pruning offers a promising solution by removing entire dimensions or layers, yet existing methods face critical trade-offs: task-agnostic approaches cannot adapt to task-specific requirements, while task-aware methods require costly training to learn task adaptability. We propose DIET (Dimension-wise global pruning of LLMs via merging Task-wise importance scores), a training-free structured pruning method that combines dimension-level granularity with task-aware selection. DIET profiles activation magnitudes across tasks using only 100 samples per task, then applies majority voting to construct a single global mask, requiring neither costly pre-computation nor training. Experiments on seven zero-shot benchmarks using Gemma-2 2B and 9B models demonstrate the effectiveness of DIET; for example, at 20% sparsity on Gemma-2 2B, DIET achieves a nearly 10% average accuracy improvement over previous state-of-the-art structured pruning methods. This advantage persists across sparsity levels and model scales, positioning DIET as a practical and robust choice for structured LLM pruning.
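To make the fusion step concrete, here is a minimal plain-Python sketch of the majority-voting mask construction the abstract describes. It is an illustration under our own assumptions, not the authors' implementation: the function name `diet_mask`, the simple-majority threshold, and the per-task top-k voting rule are all ours; the paper's actual scoring and thresholding details may differ.

```python
def diet_mask(task_scores, sparsity, threshold=None):
    """Illustrative DIET-style mask fusion (hypothetical names, not the paper's code).

    task_scores[t][d] is the activation-magnitude importance of hidden
    dimension d for task t. Returns a boolean keep-mask over dimensions.
    """
    n_tasks, n_dims = len(task_scores), len(task_scores[0])
    if threshold is None:
        threshold = n_tasks // 2 + 1        # simple majority (our assumption)
    k = int((1.0 - sparsity) * n_dims)      # dims each task votes to keep
    votes = [0] * n_dims
    for scores in task_scores:
        # each task votes for its top-k most important dimensions
        top_k = sorted(range(n_dims), key=lambda d: scores[d], reverse=True)[:k]
        for d in top_k:
            votes[d] += 1
    return [v >= threshold for v in votes]  # True = keep dimension

# toy example: 3 tasks, 8 hidden dimensions, 50% sparsity
scores = [
    [0.9, 0.1, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4],
    [0.9, 0.2, 0.8, 0.1, 0.3, 0.7, 0.6, 0.4],
    [0.1, 0.9, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4],
]
mask = diet_mask(scores, sparsity=0.5)
# → [True, False, True, False, True, False, True, False]: dims 0, 2, 4, 6
# win at least 2 of 3 task votes and are kept; the rest are pruned.
```

Note a design consequence of voting over per-task top-k sets: the fused mask can keep more or fewer than k dimensions, since only dimensions important to a majority of tasks survive, which is precisely how a single global mask can serve multiple tasks at once.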