🤖 AI Summary
To address the challenges of large model size and high inference latency in large language model (LLM) deployment, as well as the limitations of existing post-training pruning methods, which either require fine-tuning or incur substantial performance degradation, this paper proposes a retraining-free joint pruning framework. It integrates weight update magnitudes with neuron activation patterns to dynamically identify redundant parameters, enabling both structured and unstructured pruning. The method is model-agnostic, computationally efficient, and straightforward to implement. Extensive evaluation on mainstream LLMs, including LLaMA-2, LLaMA-3, and OPT, demonstrates state-of-the-art performance across multiple standard language understanding and generation benchmarks: the method achieves the lowest perplexity and the highest zero-shot accuracy, significantly outperforming existing pruning approaches that rely on extensive weight updates.
📝 Abstract
Large language models (LLMs) have rapidly advanced in recent years, achieving remarkable performance across a wide range of natural language processing tasks. However, this progress has come at the cost of increasingly large model sizes, which pose significant challenges for deployment, scalability, and energy efficiency. To address these limitations, post-training pruning has emerged as a promising approach for reducing model size and inference latency without the need for retraining. Despite these advantages, many existing pruning methods result in substantial performance degradation or require computationally expensive fine-tuning. In this work, we introduce Z-Pruner, a novel post-training pruning method designed to induce sparsity in pretrained LLMs without any retraining. Unlike conventional approaches, Z-Pruner leverages both weight update magnitudes and activation patterns to identify and eliminate redundant parameters more effectively. Our method is model-agnostic, efficient, and easy to implement. We evaluate Z-Pruner on multiple widely used LLM architectures, including LLaMA-2, LLaMA-3, and OPT, across a diverse set of standard language benchmarks. Experimental results demonstrate that Z-Pruner surpasses state-of-the-art pruning methods that require intensive weight updates. Specifically, Z-Pruner achieves the lowest perplexity scores and the highest overall average score for zero-shot accuracy. We have made the corresponding code publicly available at https://github.com/sazzadadib/Z-Pruner.
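To make the core idea concrete, here is a minimal sketch of post-training pruning that scores each weight by combining its magnitude with calibration-time activation statistics, in the spirit of the weight-and-activation criterion the abstract describes. This is an illustrative stand-in, not Z-Pruner's actual scoring rule (for that, see the linked repository); `prune_layer`, the score formula, and the calibration data are all assumptions for the example.

```python
import numpy as np

def prune_layer(W, X, sparsity=0.5):
    """Zero out the lowest-scoring weights of one linear layer, no retraining.

    W: (out_features, in_features) pretrained weight matrix
    X: (n_samples, in_features) calibration activations for this layer
    Each weight w_ij is scored by |w_ij| * ||x_j||_2, i.e. weight magnitude
    scaled by the norm of its input channel's activations — an illustrative
    criterion, not the paper's exact one.
    """
    act_norm = np.linalg.norm(X, axis=0)            # per-input-channel activation norm
    scores = np.abs(W) * act_norm[None, :]          # importance of every weight
    k = int(W.size * sparsity)                      # how many weights to remove
    threshold = np.partition(scores.ravel(), k)[k]  # k-th smallest score
    return W * (scores >= threshold)                # mask out low-importance weights

# Toy usage: prune a random layer to 50% unstructured sparsity.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))
X = rng.normal(size=(32, 16))
W_pruned = prune_layer(W, X, sparsity=0.5)
print(float(np.mean(W_pruned == 0)))  # fraction of zeroed weights ≈ sparsity
```

Because the mask is computed only from the frozen weights and a small batch of calibration activations, the whole procedure runs in a single forward pass per layer, which is what makes this family of methods retraining-free.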