IG-Pruning: Input-Guided Block Pruning for Large Language Models

πŸ“… 2025-11-04
πŸ›οΈ Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Large language models (LLMs) incur substantial computational overhead during inference, and existing static depth pruning methods generalize poorly across tasks and inputs. Method: This paper proposes an input-aware dynamic block pruning framework that avoids fixed pruning masks. Instead, it groups inputs via semantic clustering, uses L0 regularization to optimize a diverse set of fine-grained layer-wise structural masks, and selects the best pruning configuration at inference time with a lightweight input-guided mechanism, all without extensive retraining. Contribution/Results: The method improves adaptability across diverse tasks and input distributions. Experiments show consistent gains over state-of-the-art static depth pruning baselines on multiple benchmarks: the method preserves model accuracy while substantially reducing FLOPs, enabling efficient deployment in resource-constrained environments.
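The inference-time routing described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the names (`select_mask`, `pruned_forward`), the cosine-similarity routing, and the binary mask format are assumptions for the sake of the example.

```python
import numpy as np

def select_mask(input_embedding, centroids, masks):
    """Route an input to its nearest semantic cluster (cosine similarity)
    and return that cluster's precomputed layer mask."""
    x = input_embedding / np.linalg.norm(input_embedding)
    c = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    cluster = int(np.argmax(c @ x))  # index of the most similar centroid
    return masks[cluster]

def pruned_forward(hidden, layers, mask):
    """Run only the transformer blocks whose mask entry is 1; skip the rest."""
    for layer, keep in zip(layers, mask):
        if keep:
            hidden = layer(hidden)
    return hidden
```

Because the per-cluster masks are discovered offline, the only inference-time overhead is one embedding lookup and a nearest-centroid search, which is what makes the selection mechanism lightweight.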

πŸ“ Abstract
With the growing computational demands of large language models (LLMs), efficient inference has become increasingly critical for practical deployment. Depth pruning has emerged as a promising approach for reducing the computational costs of large language models by removing transformer layers. However, existing methods typically rely on fixed block masks, which can lead to suboptimal performance across different tasks and inputs. In this paper, we propose IG-Pruning, a novel input-aware block-wise pruning method that dynamically selects layer masks at inference time. Our approach consists of two stages: (1) Discovering diverse mask candidates through semantic clustering and L0 optimization, and (2) Implementing efficient dynamic pruning without the need for extensive training. Experimental results demonstrate that our method consistently outperforms state-of-the-art static depth pruning methods, making it particularly suitable for resource-constrained deployment scenarios.
Problem

Research questions and friction points this paper is trying to address.

Dynamic layer mask selection for LLM inference optimization
Overcoming suboptimal performance of fixed block pruning methods
Reducing computational costs without extensive retraining requirements
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic layer mask selection at inference time
Semantic clustering and L0 optimization for mask discovery
Efficient pruning without extensive training requirements
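The L0 optimization mentioned above is commonly made differentiable with hard-concrete gates (Louizos et al., 2018); the sketch below shows that standard construction as one plausible reading of the mask-discovery stage. The constants and function names are illustrative, and IG-Pruning's exact formulation may differ.

```python
import numpy as np

# Standard hard-concrete stretch parameters (assumed, not from the paper).
GAMMA, ZETA, BETA = -0.1, 1.1, 2.0 / 3.0

def sample_gate(log_alpha, rng):
    """Sample a stochastic layer gate in [0, 1]; 0 means the layer is pruned."""
    u = rng.uniform(1e-6, 1 - 1e-6, size=np.shape(log_alpha))
    s = 1.0 / (1.0 + np.exp(-(np.log(u) - np.log(1 - u) + log_alpha) / BETA))
    return np.clip(s * (ZETA - GAMMA) + GAMMA, 0.0, 1.0)

def expected_l0(log_alpha):
    """Differentiable expected number of active layers: P(gate > 0).
    Adding this to the task loss penalizes keeping layers."""
    return 1.0 / (1.0 + np.exp(-(log_alpha - BETA * np.log(-GAMMA / ZETA))))
```

Training one set of `log_alpha` parameters per semantic cluster, rather than a single global set, is what would yield the diverse mask candidates the paper selects among at inference time.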
πŸ”Ž Similar Papers
No similar papers found.
Kangyu Qiao
Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences (ICT/CAS)
Shaolei Zhang
Institute of Computing Technology, Chinese Academy of Sciences (ICT/CAS)
Natural Language Processing · Large Language Model · Multimodal LLMs · Simultaneous Translation
Yang Feng
Key Laboratory of AI Safety, Chinese Academy of Sciences