Learnable Permutation for Structured Sparsity on Transformer Models

📅 2026-01-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a key limitation of structured pruning in Transformers: it is constrained by the fixed arrangement of weight channels. Existing approaches resort to suboptimal heuristic strategies because the search space for optimal permutations is intractably large. To overcome this, we propose the first end-to-end learnable framework for weight permutation, which integrates a learnable permutation cost matrix, a differentiable bipartite matching solver, and a sparsity-aware loss function tailored for structured pruning. By embedding this combinatorial optimization directly into the training pipeline, our method jointly optimizes channel permutation and pruning decisions. Experiments demonstrate consistent and significant improvements in post-pruning accuracy across both vision and language Transformer models, establishing a new state of the art in permutation-aware structured pruning.
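To see why channel order matters for structured sparsity, consider 2:4 pruning, which keeps the two largest-magnitude weights in every group of four consecutive input channels. Because all rows of a weight matrix share the same channel order, a good permutation can regroup channels so that more total magnitude survives pruning. The following is a minimal illustrative sketch (not the paper's method, which learns the permutation rather than enumerating it): it brute-forces column permutations of a toy matrix and compares retained magnitude against the identity ordering.

```python
import itertools
import numpy as np

def retained_magnitude(w, perm):
    """Total |weight| kept under 2:4 pruning after permuting input channels.

    Each row is split into groups of 4 consecutive (permuted) channels;
    the top-2 magnitudes per group are retained, the rest are pruned.
    """
    wp = np.abs(w[:, perm])
    groups = wp.reshape(wp.shape[0], -1, 4)   # rows x groups x 4 channels
    kept = np.sort(groups, axis=-1)[..., 2:]  # top-2 magnitudes per group
    return kept.sum()

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))  # toy weight matrix: 8 outputs x 8 input channels

identity_score = retained_magnitude(w, np.arange(8))
best_score = max(retained_magnitude(w, np.array(p))
                 for p in itertools.permutations(range(8)))
# best_score >= identity_score: reordering channels can only help or tie
```

Exhaustive search like this is only feasible for toy sizes; the factorial growth of the permutation space is exactly the friction point the paper's learnable formulation targets.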

📝 Abstract
Structured sparsity has emerged as a popular model pruning technique, widely adopted in various architectures, including CNNs, Transformer models, and especially large language models (LLMs) in recent years. A promising direction to further improve post-pruning performance is weight permutation, which reorders model weights into patterns more amenable to pruning. However, the exponential growth of the permutation search space with the scale of Transformer architectures forces most methods to rely on greedy or heuristic algorithms, limiting the effectiveness of reordering. In this work, we propose a novel end-to-end learnable permutation framework. Our method introduces a learnable permutation cost matrix to quantify the cost of swapping any two input channels of a given weight matrix, a differentiable bipartite matching solver to obtain the optimal binary permutation matrix given a cost matrix, and a sparsity optimization loss function to directly optimize the permutation operator. We extensively validate our approach on vision and language Transformers, demonstrating that our method achieves state-of-the-art permutation results for structured sparsity.
Problem

Research questions and friction points this paper is trying to address.

structured sparsity
weight permutation
Transformer models
pruning
search space
Innovation

Methods, ideas, or system contributions that make the work stand out.

learnable permutation
structured sparsity
differentiable bipartite matching
end-to-end optimization
Transformer pruning