MaskPrune: Mask-based LLM Pruning for Layer-wise Uniform Structures

๐Ÿ“… 2025-02-19
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Large language models (LLMs) are increasingly costly to deploy and serve as they scale, and existing structured pruning methods sacrifice inter-layer structural consistency to preserve accuracy, which hinders hardware acceleration and efficient post-pruning fine-tuning. Method: a minimax-optimized mask learning paradigm for structured pruning that achieves uniform layer-wise width reduction, jointly optimizing pruning masks under inter-layer structural constraints and sparsity-inducing regularization. Contribution/Results: the method maintains high accuracy, hardware compatibility, and differentiability for downstream adaptation; on multiple benchmarks it outperforms state-of-the-art methods, and at equivalent accuracy it delivers substantial inference throughput gains and enables efficient fine-tuning, offering a principled and practical path to compressing and deploying LLMs.
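The minimax formulation can be pictured as a primal-dual game: gradient descent on the mask logits tries to keep important channels, while a Lagrange multiplier ascends to enforce a target density. Below is a minimal toy sketch of that idea; the function name, the importance-based surrogate loss, and all hyperparameters are illustrative assumptions, not the paper's actual objective.

```python
import numpy as np

def minimax_mask_pruning(importance, target_density=0.5, steps=500,
                         lr_s=0.5, lr_lam=0.5):
    """Toy primal-dual (minimax) mask learning: minimize the importance
    lost to pruning while a dual variable pushes the soft mask toward a
    target density. A hypothetical simplification for exposition."""
    s = np.zeros_like(importance)   # mask logits, one per channel
    lam = 0.0                       # dual variable enforcing sparsity
    for _ in range(steps):
        m = 1.0 / (1.0 + np.exp(-s))              # soft mask in (0, 1)
        # gradient of (1 - m) * importance + lam * (mean(m) - target) w.r.t. s
        grad_s = (-importance + lam / len(s)) * m * (1.0 - m)
        s -= lr_s * grad_s                        # primal descent on logits
        lam += lr_lam * (m.mean() - target_density)  # dual ascent
        lam = max(lam, 0.0)                       # keep multiplier nonnegative
    m = 1.0 / (1.0 + np.exp(-s))
    return m > 0.5                                # binarize the final mask

importance = np.array([0.1, 0.9, 0.05, 0.8, 0.2, 0.7])
mask = minimax_mask_pruning(importance)
```

At convergence the multiplier settles at a threshold that keeps exactly the target fraction of channels, so here the three highest-importance channels survive.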

๐Ÿ“ Abstract
The remarkable performance of large language models (LLMs) in various language tasks has attracted considerable attention. However, the ever-increasing size of these models presents growing challenges for deployment and inference. Structured pruning, an effective model compression technique, is gaining increasing attention due to its ability to enhance inference efficiency. Nevertheless, most previous optimization-based structured pruning methods sacrifice the uniform structure across layers for greater flexibility to maintain performance. The heterogeneous structure hinders the effective utilization of off-the-shelf inference acceleration techniques and impedes efficient configuration for continued training. To address this issue, we propose a novel mask learning paradigm based on minimax optimization to obtain a uniform pruned structure by optimizing the masks under sparsity regularization. Extensive experimental results demonstrate that our method can maintain high performance while ensuring the uniformity of the pruned model structure, thereby outperforming existing SOTA methods.
Problem

Research questions and friction points this paper is trying to address.

Achieve layer-wise uniform pruned structures
Maintain model performance after pruning
Enhance inference efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mask-based pruning technique
Minimax optimization for uniformity
Sparsity regularization in masks
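The uniformity idea listed above can, in its simplest form, be realized by tying a single channel mask across all layers so every layer is pruned to the same width. The sketch below illustrates that shared-mask notion; the score aggregation and top-k selection are assumptions made for exposition, not the paper's optimization procedure.

```python
import numpy as np

def uniform_layer_mask(layer_scores, keep_ratio=0.5):
    """Enforce layer-wise uniformity by sharing one channel mask across
    all layers: aggregate per-layer importance scores, then keep the
    same top-k channels everywhere. Hypothetical illustration only."""
    stacked = np.stack(layer_scores)       # shape: (num_layers, width)
    agg = stacked.sum(axis=0)              # pooled importance per channel
    k = int(round(keep_ratio * agg.size))  # uniform kept width
    keep = np.argsort(agg)[-k:]            # indices of top-k channels
    mask = np.zeros(agg.size, dtype=bool)
    mask[keep] = True
    return mask                            # identical mask for every layer

scores = [np.array([0.2, 0.9, 0.1, 0.7]),
          np.array([0.3, 0.8, 0.2, 0.6])]
mask = uniform_layer_mask(scores, keep_ratio=0.5)
```

Because one mask is reused everywhere, the pruned model has identical widths in all layers, which is what makes off-the-shelf inference acceleration and straightforward continued training possible.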
๐Ÿ”Ž Similar Papers
No similar papers found.