PermLLM: Learnable Channel Permutation for N:M Sparse Large Language Models

📅 2025-10-11
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Traditional channel permutation relies on handcrafted quality metrics, making it difficult to accurately model the impact of pruning on model performance. This paper proposes PermLLM, a learnable channel-reordering framework designed for N:M sparse large language models (LLMs). Methodologically, it introduces a differentiable soft permutation matrix regularized via Sinkhorn normalization, enabling end-to-end optimization; incorporates a block-wise channel reordering strategy to balance efficiency and accuracy; and integrates seamlessly with mainstream one-shot pruning methods. Extensive experiments on the LLaMA, Qwen, and OPT families demonstrate that PermLLM significantly mitigates pruning-induced accuracy degradation, yielding average improvements of 0.8–2.3 BLEU/ACC points at identical sparsity levels. To the best of our knowledge, this is the first work to achieve adaptive, learnable channel reordering tailored to N:M sparse structures.

๐Ÿ“ Abstract
Channel permutation is a powerful technique for enhancing the accuracy of N:M sparse models by reordering the channels of weight matrices to prioritize the retention of important weights. However, traditional channel permutation methods rely on handcrafted quality metrics, which often fail to accurately capture the true impact of pruning on model performance. To address this limitation, we propose PermLLM, a novel post-training pruning framework that introduces learnable channel permutation (LCP) for N:M sparsity. LCP leverages Sinkhorn normalization to transform discrete permutation matrices into differentiable soft permutation matrices, enabling end-to-end optimization. Additionally, PermLLM incorporates an efficient block-wise channel permutation strategy, which significantly reduces the number of learnable parameters and computational complexity. PermLLM seamlessly integrates with existing one-shot pruning methods to adaptively optimize channel permutations, effectively mitigating pruning-induced errors. Extensive experiments on the LLaMA series, Qwen, and OPT models demonstrate that PermLLM achieves superior performance in optimizing N:M sparse models. The code is available at https://github.com/lanchengzou/PermLLM.
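To make the motivation concrete, the sketch below (not from the paper; all values are hypothetical) shows why channel permutation matters for N:M sparsity: in 2:4 pruning, only the 2 largest-magnitude weights in each contiguous group of 4 survive, so when important channels cluster in one group, some of them are pruned. Reordering the channels first lets all of them be kept.

```python
def nm_prune(row, n=2, m=4):
    """Keep the n largest-magnitude weights in each group of m; zero the rest."""
    out = []
    for i in range(0, len(row), m):
        group = row[i:i + m]
        keep = sorted(range(m), key=lambda j: -abs(group[j]))[:n]
        out.extend(w if j in keep else 0.0 for j, w in enumerate(group))
    return out

# A hypothetical weight row whose large weights are clustered in the first group.
row = [0.9, 0.8, 0.7, 0.6, 0.1, 0.1, 0.1, 0.1]
print(nm_prune(row))                     # clustered: 0.7 and 0.6 are pruned away

# Interleave large and small channels before applying the same 2:4 pruning.
perm = [0, 4, 1, 5, 2, 6, 3, 7]
print(nm_prune([row[p] for p in perm]))  # all four large weights survive
```

PermLLM's contribution is to learn this permutation end-to-end rather than choose it with a handcrafted metric, as the toy example does by hand here.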
Problem

Research questions and friction points this paper is trying to address.

Optimizing channel permutation to enhance N:M sparse model accuracy
Replacing handcrafted metrics with learnable permutation matrices
Reducing computational complexity through block-wise permutation strategy
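On the complexity point above, a back-of-the-envelope sketch (with illustrative sizes, not figures from the paper) shows why restricting permutations to blocks shrinks the learnable state: a single soft permutation over all channels needs a d x d matrix, while permuting only within fixed-size blocks needs one small matrix per block.

```python
# Parameter count for one full learnable soft permutation vs. a block-wise one.
d, block = 4096, 64                              # hypothetical hidden size and block size
full_params = d * d                              # one d x d soft permutation matrix
blockwise_params = (d // block) * block * block  # one block x block matrix per block
print(full_params, blockwise_params)             # 16777216 vs 262144
```

The block-wise variant trades some reordering freedom (channels never cross block boundaries) for a 64x reduction in learnable parameters at these sizes.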
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learnable channel permutation for N:M sparsity
Sinkhorn normalization enables differentiable soft permutations
Block-wise strategy reduces parameters and computational complexity
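The Sinkhorn idea in the second bullet can be sketched as follows. This is an illustrative stdlib-only implementation, not the paper's code: exponentiating a score matrix with a temperature `tau` and alternately normalizing rows and columns yields a doubly-stochastic soft permutation through which gradients can flow; as `tau` shrinks it approaches a hard permutation.

```python
import math

def sinkhorn(logits, tau=0.1, iters=50):
    """Sinkhorn normalization: alternately normalize the rows and columns of
    exp(logits / tau) to obtain a doubly-stochastic soft permutation matrix."""
    p = [[math.exp(v / tau) for v in row] for row in logits]
    n = len(p)
    for _ in range(iters):
        p = [[v / sum(row) for v in row] for row in p]                # rows sum to 1
        col = [sum(p[i][j] for i in range(n)) for j in range(n)]
        p = [[p[i][j] / col[j] for j in range(n)] for i in range(n)]  # columns sum to 1
    return p

# Hypothetical 3x3 channel-affinity scores favoring the permutation (2, 0, 1).
scores = [[0.1, 0.2, 0.9],
          [0.8, 0.1, 0.1],
          [0.1, 0.9, 0.2]]
soft_perm = sinkhorn(scores)
print([max(range(3), key=lambda j: row[j]) for row in soft_perm])  # [2, 0, 1]
```

At inference time the soft matrix would be rounded to the nearest hard permutation (e.g. by row-wise argmax or the Hungarian algorithm), so only a channel reordering, not a dense matrix, is applied to the weights.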