🤖 AI Summary
Existing LLM pruning methods primarily optimize for general language-generation capability, often neglecting task-specific performance. To address this, we propose a task-aware pruning framework that, uniquely, incorporates task-specific feature distributions into parameter importance estimation. Parameters are decomposed into shared and task-exclusive groups based on activation-norm differences between general calibration data and task-specific data, and their importance scores are then fused. Our method is compatible with mainstream pruning techniques, including magnitude-based pruning, activation-based pruning, and loss-perturbation analysis, without requiring fine-tuning or retraining. Evaluated across diverse domain benchmarks (e.g., BoolQ, RTE, SST-2), it achieves significant accuracy gains (+2.1% on average) at equivalent sparsity levels while preserving general generation quality. Results demonstrate the framework’s effectiveness, cross-task generalizability, and plug-and-play applicability.
📝 Abstract
Pruning provides a practical way to reduce the resources required to run large language models (LLMs), making their capabilities more accessible while controlling training and inference costs. Research on LLM pruning typically ranks the importance of LLM parameters using their magnitudes and calibration-data activations, then removes (or masks) the less important ones, thereby reducing model size. However, these approaches primarily focus on preserving the LLM's ability to generate fluent sentences, while neglecting performance on specific domains and tasks. In this paper, we propose a simple yet effective pruning approach for LLMs that preserves task-specific capabilities while shrinking their parameter space. We first analyze how conventional pruning minimizes loss perturbation under general-domain calibration, and extend this formulation by incorporating task-specific feature distributions into the importance computation of existing pruning algorithms. Concretely, our framework computes separate importance scores using both general and task-specific calibration data, partitions parameters into shared and exclusive groups based on activation-norm differences, and fuses their scores to guide the pruning process. This design enables our method to integrate seamlessly with various foundation pruning techniques and to preserve the LLM's specialized abilities under compression. Experiments on widely used benchmarks demonstrate that our approach is effective and consistently outperforms the baselines at identical pruning ratios across different settings.
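The score-fusion pipeline in the abstract can be sketched in a few lines. This is an illustrative toy implementation, not the paper's actual algorithm: the Wanda-style `|w| * ||x||` importance, the threshold `tau` for declaring a column task-exclusive, and the blending weight `alpha` are all assumed details for the sake of the example.

```python
# Hypothetical sketch of task-aware importance fusion: score weights with
# general and task calibration activations, mark task-exclusive columns by
# activation-norm difference, fuse the scores, and prune the lowest ones.
# All names, thresholds, and the fusion rule are illustrative assumptions.

def importance(weights, act_norms):
    # Wanda-style score: |w_ij| * ||x_j|| (each input column shares one norm).
    return [[abs(w) * act_norms[j] for j, w in enumerate(row)] for row in weights]

def fuse_scores(weights, general_norms, task_norms, tau=0.5, alpha=0.5):
    """Fuse general and task-specific importance scores.

    Columns whose activation-norm difference exceeds `tau` are treated as
    task-exclusive and keep the task-side score; the remaining (shared)
    columns get a weighted blend controlled by `alpha`.
    """
    s_gen = importance(weights, general_norms)
    s_task = importance(weights, task_norms)
    fused = []
    for row_g, row_t in zip(s_gen, s_task):
        fused_row = []
        for j, (g, t) in enumerate(zip(row_g, row_t)):
            if abs(task_norms[j] - general_norms[j]) > tau:
                fused_row.append(t)                    # task-exclusive column
            else:
                fused_row.append(alpha * g + (1 - alpha) * t)  # shared column
        fused.append(fused_row)
    return fused

def prune_mask(scores, sparsity):
    # Keep the top (1 - sparsity) fraction of weights by fused score.
    flat = sorted(s for row in scores for s in row)
    k = int(len(flat) * sparsity)
    threshold = flat[k] if k < len(flat) else float("inf")
    return [[s >= threshold for s in row] for row in scores]

if __name__ == "__main__":
    weights = [[1.0, -2.0], [0.5, 0.1]]
    fused = fuse_scores(weights, general_norms=[1.0, 1.0], task_norms=[2.0, 1.0])
    mask = prune_mask(fused, sparsity=0.25)  # mask out the lowest 25% of weights
    print(fused, mask)
```

Because the fusion only reweights importance scores, any base criterion (magnitude, activation-based, or loss-perturbation) can be dropped into `importance` unchanged, which is what makes the approach plug-and-play.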