AI Summary
Large language models (LLMs) suffer from inefficient deployment and inference due to their ever-increasing scale; existing structured pruning methods compromise inter-layer structural consistency to preserve accuracy, hindering hardware acceleration and efficient post-pruning fine-tuning.
Method: We propose a minimax-optimized mask learning paradigm for structured pruning, the first to achieve uniform layer-wise width reduction. Our approach jointly optimizes pruning masks under inter-layer structural constraints and sparsity-inducing regularization.
Contribution/Results: The method simultaneously ensures high accuracy, hardware compatibility, and differentiability for downstream adaptation. On multiple benchmarks, it significantly outperforms state-of-the-art methods. At equivalent accuracy, it achieves substantial inference throughput gains and enables efficient fine-tuning. This work establishes a theoretically grounded and practically deployable pathway for compressing and serving LLMs.
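Concretely, the mask-learning problem described above can be phrased as a minimax (Lagrangian) program. The notation below is our own sketch of such a formulation, not the paper's exact objective: `m` are relaxed per-channel masks, `λ` weights the sparsity regularizer, and the multipliers `μ_l` enforce that every layer keeps the same number `k` of channels (uniform width):

```latex
\min_{m \in [0,1]^{L \times d}} \; \max_{\mu \in \mathbb{R}^{L}} \;
\mathcal{L}(\theta \odot m)
\; + \; \lambda \sum_{l=1}^{L} \lVert m_l \rVert_1
\; + \; \sum_{l=1}^{L} \mu_l \left( \sum_{i=1}^{d} m_{l,i} - k \right)
```

Here \(\mathcal{L}(\theta \odot m)\) is the task loss of the model with masked weights; the inner maximization over \(\mu\) drives every layer toward the shared target width \(k\), which is what yields a hardware-friendly uniform structure after discretization.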
Abstract
The remarkable performance of large language models (LLMs) in various language tasks has attracted considerable attention. However, the ever-increasing size of these models presents growing challenges for deployment and inference. Structured pruning, an effective model compression technique, is gaining increasing attention due to its ability to enhance inference efficiency. Nevertheless, most previous optimization-based structured pruning methods sacrifice structural uniformity across layers for the flexibility needed to maintain performance. The resulting heterogeneous structure hinders the effective use of off-the-shelf inference acceleration techniques and impedes efficient configuration for continued training. To address this issue, we propose a novel mask learning paradigm based on minimax optimization that obtains a uniform pruned structure by optimizing the masks under sparsity regularization. Extensive experimental results demonstrate that our method maintains high performance while ensuring the uniformity of the pruned model structure, thereby outperforming existing SOTA methods.
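As a toy illustration of this minimax mask-learning idea, the sketch below runs gradient descent on relaxed masks and gradient ascent on per-layer Lagrange multipliers, then discretizes to a uniform width. Everything here (the synthetic importance scores, the quadratic proxy loss, all hyperparameters) is our own simplification, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 4 layers with 16 prunable channels each; per-channel
# "importance" scores stand in for the task-loss sensitivity of each channel.
n_layers, width, target_width = 4, 16, 8
importance = rng.uniform(0.1, 1.0, size=(n_layers, width))

masks = np.full((n_layers, width), 0.5)   # relaxed masks in [0, 1] (minimized)
mults = np.zeros(n_layers)                # Lagrange multipliers (maximized)
lam, lr_m, lr_mu = 0.05, 0.1, 0.1        # sparsity weight and step sizes

for _ in range(500):
    # Descent on masks: quadratic proxy loss importance * (1 - m)^2
    # + L1 sparsity term + the layer-width constraint term.
    grad = -2.0 * importance * (1.0 - masks) + lam + mults[:, None]
    masks = np.clip(masks - lr_m * grad, 0.0, 1.0)
    # Ascent on multipliers: push each layer toward the shared target width.
    mults += lr_mu * (masks.sum(axis=1) - target_width)

# Discretize: keep the top-`target_width` channels in every layer, giving a
# uniform (hardware-friendly) pruned structure with identical layer widths.
keep = np.argsort(-masks, axis=1)[:, :target_width]
pruned_widths = [len(set(k)) for k in keep]
```

Because the constraint multipliers are shared within each layer, the learned mask values end up monotone in channel importance, so the top-`k` selection retains the most important channels while every layer is pruned to exactly the same width.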