EvoP: Robust LLM Inference via Evolutionary Pruning

📅 2025-02-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limitations of existing large language model (LLM) pruning methods—namely, their neglect of data characteristics and their reliance on heuristic, suboptimal pruning strategies—this paper proposes EvoP, an evolutionary structured pruning framework for robust LLM inference. Key contributions include: (1) cluster-based calibration dataset sampling (CCDS), which improves the representativeness of the calibration data via clustering-based selection; and (2) evolutionary pruning pattern searching (EPPS), which searches for the optimal pruning pattern instead of relying on hand-crafted heuristics. Evaluated across multiple LLM architectures and downstream tasks, EvoP achieves the best accuracy among structured pruning baselines while maintaining the best efficiency.

📝 Abstract
Large Language Models (LLMs) have achieved remarkable success in natural language processing tasks, but their massive size and computational demands hinder their deployment in resource-constrained environments. Existing structured pruning methods address this issue by removing redundant structures (e.g., elements, channels, layers) from the model. However, these methods employ heuristic pruning strategies, which lead to suboptimal performance; moreover, they ignore data characteristics when pruning the model. To overcome these limitations, we propose EvoP, an evolutionary pruning framework for robust LLM inference. EvoP first presents a cluster-based calibration dataset sampling (CCDS) strategy for creating a more diverse calibration dataset. EvoP then introduces an evolutionary pruning pattern searching (EPPS) method to find the optimal pruning pattern. Compared to existing structured pruning techniques, EvoP achieves the best performance while maintaining the best efficiency. Experiments across different LLMs and different downstream tasks validate the effectiveness of the proposed EvoP, making it a practical and scalable solution for deploying LLMs in real-world applications.
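The CCDS idea described in the abstract—cluster candidate calibration samples, then draw representatives from each cluster so the calibration set covers diverse data modes—can be sketched as below. The feature representation, the use of plain k-means (Lloyd's algorithm), and the per-cluster sample counts are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def ccds_sample(features: np.ndarray, n_clusters: int, per_cluster: int, seed: int = 0):
    """Cluster candidate calibration samples (rows of `features`) and pick
    representatives from each cluster, returning the chosen row indices."""
    rng = np.random.default_rng(seed)
    # Lightweight k-means: initialize centers from random samples, then
    # alternate assignment and center-update steps.
    centers = features[rng.choice(len(features), n_clusters, replace=False)]
    assign = np.zeros(len(features), dtype=int)
    for _ in range(10):
        dists = np.linalg.norm(features[:, None] - centers[None], axis=2)
        assign = dists.argmin(axis=1)
        for k in range(n_clusters):
            members = features[assign == k]
            if len(members):
                centers[k] = members.mean(axis=0)
    # Sample evenly from each cluster -> a diverse calibration subset.
    chosen = []
    for k in range(n_clusters):
        idx = np.flatnonzero(assign == k)
        take = min(per_cluster, len(idx))
        if take:
            chosen.extend(rng.choice(idx, take, replace=False))
    return np.array(chosen)
```

In practice the features could be sentence embeddings of candidate calibration texts; sampling per cluster rather than uniformly at random is what gives the calibration set its diversity.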
Problem

Research questions and friction points this paper is trying to address.

Reducing LLM size and compute for resource-constrained environments.
Moving beyond heuristic pruning strategies that yield suboptimal performance.
Accounting for data characteristics when pruning the model.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evolutionary pruning framework (EvoP) for robust LLM inference
Cluster-based calibration dataset sampling (CCDS) for more diverse calibration data
Evolutionary pruning pattern searching (EPPS) for finding the optimal pruning pattern