Z-Pruner: Post-Training Pruning of Large Language Models for Efficiency without Retraining

📅 2025-08-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenges of large model size and high inference latency in large language model (LLM) deployment—alongside the limitations of existing post-training pruning methods, which either require fine-tuning or incur substantial performance degradation—this paper proposes a retraining-free joint pruning framework. It innovatively integrates weight update magnitude with neuron activation patterns to dynamically identify redundant parameters, enabling both structured and unstructured pruning. The method is model-agnostic, computationally efficient, and straightforward to implement. Extensive evaluation on mainstream LLMs—including LLaMA-2, LLaMA-3, and OPT—demonstrates state-of-the-art performance across multiple standard language understanding and generation benchmarks: it achieves the lowest perplexity and highest zero-shot accuracy, significantly outperforming existing pruning approaches that rely on extensive weight updates.

📝 Abstract
Large language models (LLMs) have rapidly advanced in recent years, achieving remarkable performance across a wide range of natural language processing tasks. However, this progress has come at the cost of increasingly large model sizes, which pose significant challenges for deployment, scalability, and energy efficiency. To address these limitations, post-training pruning has emerged as a promising approach for reducing model size and inference latency without the need for retraining. Despite these advantages, many existing pruning methods result in substantial performance degradation or require computationally expensive fine-tuning. In this work, we introduce Z-Pruner, a novel post-training pruning method designed to induce sparsity in pretrained LLMs without any retraining. Unlike conventional approaches, Z-Pruner leverages both weight update magnitudes and activation patterns to identify and eliminate redundant parameters more effectively. Our method is model-agnostic, efficient, and easy to implement. We evaluate Z-Pruner using multiple widely-used LLM architectures, including LLaMA-2, LLaMA-3, and OPT, across a diverse set of standard language benchmarks. Experimental results demonstrate that Z-Pruner surpasses state-of-the-art pruning methods that require intensive weight updates. Specifically, Z-Pruner achieves the lowest perplexity scores and the highest overall average score for zero-shot accuracy. We have made the corresponding codes publicly available at https://github.com/sazzadadib/Z-Pruner.
Problem

Research questions and friction points this paper is trying to address.

Reducing LLM size without retraining for efficiency
Minimizing performance degradation in post-training pruning
Identifying redundant parameters using weight and activation patterns
Innovation

Methods, ideas, or system contributions that make the work stand out.

Post-training pruning without retraining for efficiency
Leverages weight updates and activation patterns
Model-agnostic method inducing sparsity in LLMs
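The paper's page does not reproduce the exact scoring rule, but the idea of combining weight magnitudes with activation patterns can be illustrated with a minimal, hypothetical sketch in the style of activation-aware pruning criteria (e.g. weighting each parameter by the norm of its input feature's calibration activations). This is an assumption-laden illustration of the general technique, not the actual Z-Pruner algorithm; `prune_layer`, its signature, and the scoring formula are invented for exposition.

```python
import numpy as np

def prune_layer(W: np.ndarray, X: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the lowest-importance weights of one linear layer.

    W: (out_features, in_features) weight matrix.
    X: (num_samples, in_features) calibration activations fed to the layer.
    sparsity: fraction of weights to remove in each output row.
    """
    # Per-input-feature activation strength from the calibration set.
    act_norm = np.linalg.norm(X, axis=0)            # shape: (in_features,)
    # Importance score: weight magnitude scaled by activation strength.
    score = np.abs(W) * act_norm                    # shape: (out, in)
    k = int(W.shape[1] * sparsity)                  # weights to drop per row
    if k == 0:
        return W.copy()
    # Indices of the k lowest-scoring weights in each output row.
    drop_idx = np.argsort(score, axis=1)[:, :k]
    W_pruned = W.copy()
    np.put_along_axis(W_pruned, drop_idx, 0.0, axis=1)
    return W_pruned

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))                         # toy layer
X = rng.normal(size=(16, 8))                        # toy calibration batch
Wp = prune_layer(W, X, sparsity=0.5)
print((Wp == 0).mean())                             # prints 0.5: half of each row zeroed
```

Because the criterion only reads calibration activations and never updates the surviving weights, it matches the retraining-free, post-training setting the paper targets; the real method's joint use of weight-update magnitudes is not modeled here.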
Samiul Basir Bhuiyan
Department of Electrical and Computer Engineering, North South University, Dhaka, 1229, Bangladesh
Md. Sazzad Hossain Adib
Department of Electrical and Computer Engineering, North South University, Dhaka, 1229, Bangladesh
Mohammed Aman Bhuiyan
Department of Electrical and Computer Engineering, North South University, Dhaka, 1229, Bangladesh
Muhammad Rafsan Kabir
Department of Electrical and Computer Engineering, North South University
Machine Learning, Natural Language Processing, Computer Vision
Moshiur Farazi
University of Doha for Science and Technology, Australian National University
Computer Vision, Vision-Language Models, Applied AI
Shafin Rahman
Associate Professor, ECE, North South University, Bangladesh
Computer Vision, Machine Learning
Nabeel Mohammed
North South University
Natural Language Processing, Computer Vision, Deep Learning