PGB: One-Shot Pruning for BERT via Weight Grouping and Permutation

📅 2025-02-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high inference latency and memory footprint of large pretrained language models such as BERT, this paper proposes PGB, a fine-tuning-free, one-shot semi-structured pruning method. Methodologically, PGB introduces three key ideas: (1) a weight-permutation-based grouping mechanism that identifies salient substructures in multi-head attention and feed-forward layers; (2) joint use of intra-layer structured pruning and inter-layer full-layer removal; and (3) hierarchical adaptive sparsity allocation, enabling strong performance under high-sparsity regimes. Evaluated on BERT$_{\text{BASE}}$, PGB achieves substantial reductions in both computational cost and model size while preserving higher accuracy than state-of-the-art structured pruning approaches. The method thus offers an effective and practical route to efficient transformer inference without requiring retraining or architectural modifications.
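The permutation-and-grouping step described above can be illustrated with a toy magnitude-based sketch (numpy only; the function name `permute_and_group`, the column-norm importance score, and the fixed group size are illustrative assumptions, not the paper's actual saliency criteria):

```python
import numpy as np

def permute_and_group(W, group_size=4, keep_ratio=0.5):
    """Toy sketch of permutation-based semi-structured pruning.

    Columns of W are reordered by a simple importance score so that
    salient weights cluster into contiguous groups; low-scoring groups
    are then zeroed out as whole structures (semi-structured sparsity).
    """
    assert W.shape[1] % group_size == 0, "toy version: columns must divide evenly"

    # 1. Permute columns so high-importance columns become adjacent.
    importance = np.linalg.norm(W, axis=0)   # per-column L2 norm (assumed score)
    perm = np.argsort(-importance)           # descending importance
    W_perm = W[:, perm]

    # 2. Partition the permuted columns into fixed-size groups.
    n_groups = W.shape[1] // group_size
    groups = W_perm.reshape(W.shape[0], n_groups, group_size)

    # 3. Keep the top-scoring groups; prune the rest as whole structures.
    group_scores = np.linalg.norm(groups, axis=(0, 2))
    n_keep = max(1, int(n_groups * keep_ratio))
    pruned = groups.copy()
    pruned[:, np.argsort(-group_scores)[n_keep:], :] = 0.0

    # 4. Undo the permutation so the layer's input/output interface is unchanged.
    W_out = np.zeros_like(W)
    W_out[:, perm] = pruned.reshape(W.shape[0], -1)
    return W_out
```

Because pruning removes whole groups rather than scattered weights, the surviving structure can be executed as dense blocks, which is what makes semi-structured sparsity hardware-friendly.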

📝 Abstract
Large pretrained language models such as BERT suffer from slow inference and high memory usage due to their huge size. Recent approaches to compressing BERT rely on iterative pruning and knowledge distillation, which, however, are often too complicated and computationally intensive. This paper proposes a novel semi-structured one-shot pruning method for BERT, called $\textit{Permutation and Grouping for BERT}$ (PGB), which achieves high compression efficiency and sparsity while preserving accuracy. To this end, PGB identifies important groups of individual weights by permutation and prunes all other weights as a structure in both multi-head attention and feed-forward layers. Furthermore, if no important group is formed in a particular layer, PGB drops the entire layer to produce an even more compact model. Our experimental results on BERT$_{\text{BASE}}$ demonstrate that PGB outperforms the state-of-the-art structured pruning methods in terms of computational cost and accuracy preservation.
Problem

Research questions and friction points this paper is trying to address.

Compressing BERT's large model size
Reducing inference time and memory usage
Maintaining accuracy after pruning
Innovation

Methods, ideas, or system contributions that make the work stand out.

One-shot pruning method
Weight grouping and permutation
Layer dropping for compactness
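The layer-dropping idea in the last bullet can be sketched as a simple threshold check (hypothetical function name and threshold; the paper's actual importance criterion may differ):

```python
import numpy as np

def prune_or_drop_layer(W, group_size=4, score_threshold=1.0):
    """If no column group clears the importance threshold, signal that the
    whole layer can be removed; otherwise return the group-pruned weights.

    A hypothetical threshold-based variant of the layer-dropping rule.
    """
    n_groups = W.shape[1] // group_size
    groups = W[:, :n_groups * group_size].reshape(W.shape[0], n_groups, group_size)
    scores = np.linalg.norm(groups, axis=(0, 2))   # one score per group
    keep = scores >= score_threshold
    if not keep.any():
        return None                                # no important group -> drop layer
    pruned = groups.copy()
    pruned[:, ~keep, :] = 0.0                      # prune unimportant groups
    return pruned.reshape(W.shape[0], -1)
```

Returning `None` here stands in for removing the layer from the network, which is how pruning within layers and dropping whole layers can share one decision rule.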