AI Summary
Traditional channel permutation relies on handcrafted quality metrics, making it difficult to accurately model the impact of pruning on model performance. This paper proposes PermLLM, a learnable channel reordering framework specifically designed for N:M sparse large language models (LLMs). Methodologically, it introduces a differentiable soft permutation matrix regularized via Sinkhorn normalization, enabling end-to-end optimization; incorporates a block-wise channel reordering strategy to balance efficiency and accuracy; and seamlessly integrates with mainstream one-shot pruning methods. Extensive experiments on LLaMA, Qwen, and OPT families demonstrate that PermLLM significantly mitigates accuracy degradation induced by pruning, yielding average improvements of 0.8–2.3 BLEU/ACC points at identical sparsity levels. To the best of our knowledge, this is the first work to achieve adaptive, learnable channel reordering tailored for N:M sparse structures.
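To see why channel reordering matters for N:M sparsity, the toy sketch below (not the paper's code; the weight values and permutation are illustrative) applies 2:4 magnitude pruning to a weight row whose large entries happen to cluster in one group of four. Without permutation, two large weights must be dropped; interleaving large and small channels first lets all four survive.

```python
import numpy as np

def prune_2_4(w):
    """Zero the 2 smallest-magnitude entries in each group of 4 columns (per row)."""
    w = w.copy()
    rows, cols = w.shape
    g = w.reshape(rows, cols // 4, 4)
    idx = np.argsort(np.abs(g), axis=-1)[..., :2]  # indices of the 2 smallest per group
    np.put_along_axis(g, idx, 0.0, axis=-1)        # zero them out
    return g.reshape(rows, cols)

# Toy 1x8 weight row: the four important weights all sit in the first group.
w = np.array([[9., 8., 7., 6., 0.1, 0.2, 0.3, 0.4]])
perm = [0, 4, 1, 5, 2, 6, 3, 7]  # hypothetical permutation interleaving channels

plain = prune_2_4(w)            # forced to drop 7 and 6
permuted = prune_2_4(w[:, perm])  # keeps 9, 8, 7, and 6
```

The retained magnitude jumps from 17.7 to 30.0 under the permutation, which is exactly the gap that learned (rather than handcrafted) reordering aims to close.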
Abstract
Channel permutation is a powerful technique for enhancing the accuracy of N:M sparse models by reordering the channels of weight matrices to prioritize the retention of important weights. However, traditional channel permutation methods rely on handcrafted quality metrics, which often fail to accurately capture the true impact of pruning on model performance. To address this limitation, we propose PermLLM, a novel post-training pruning framework that introduces learnable channel permutation (LCP) for N:M sparsity. LCP leverages Sinkhorn normalization to transform discrete permutation matrices into differentiable soft permutation matrices, enabling end-to-end optimization. Additionally, PermLLM incorporates an efficient block-wise channel permutation strategy, which significantly reduces the number of learnable parameters and computational complexity. PermLLM seamlessly integrates with existing one-shot pruning methods to adaptively optimize channel permutations, effectively mitigating pruning-induced errors. Extensive experiments on the LLaMA series, Qwen, and OPT models demonstrate that PermLLM achieves superior performance in optimizing N:M sparse models. The code is available at https://github.com/lanchengzou/PermLLM.
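The key enabler the abstract describes is Sinkhorn normalization, which relaxes a discrete permutation into a doubly stochastic "soft" permutation that gradients can flow through. A minimal sketch of the standard Sinkhorn iteration (iteration count and temperature are illustrative choices, not the paper's settings):

```python
import numpy as np

def sinkhorn(logits, n_iters=50, tau=1.0):
    """Alternately normalize rows and columns of exp(logits / tau)
    until the result is (approximately) doubly stochastic."""
    s = np.exp(logits / tau)
    for _ in range(n_iters):
        s = s / s.sum(axis=1, keepdims=True)  # rows sum to 1
        s = s / s.sum(axis=0, keepdims=True)  # columns sum to 1
    return s

rng = np.random.default_rng(0)
p = sinkhorn(rng.normal(size=(4, 4)))
# p's rows and columns each sum to ~1: a soft permutation matrix.
```

Lowering the temperature `tau` sharpens the soft matrix toward a hard 0/1 permutation, so the learned reordering can be discretized after training.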