SparseSwaps: Tractable LLM Pruning Mask Refinement at Scale

📅 2025-12-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenges in large language model (LLM) pruning—namely, intractable layer-wise mask optimization, scarcity of calibration data, and prohibitively large combinatorial search spaces rendering integer programming (IP) infeasible—we propose an efficient and scalable 1-swap mask fine-tuning method. Our key contributions are threefold: (i) we derive, for the first time, a closed-form optimal solution for 1-swap updates based on the Gram matrix; (ii) we decouple row-wise constraints to enable GPU-native parallelism and eliminate hyperparameter tuning, facilitating large-scale Transformer adaptation; and (iii) we integrate uniform sparsity constraints with IP-inspired dimensionality reduction to avoid full retraining. Evaluated on GPT-family models, our method reduces per-layer pruning error by up to 60% compared to Wanda, while significantly improving perplexity and zero-shot accuracy.

📝 Abstract
The resource requirements of Neural Networks can be significantly reduced through pruning -- the removal of seemingly less important parameters. However, with the rise of Large Language Models (LLMs), full retraining to recover pruning-induced performance degradation is often prohibitive and classical approaches such as global magnitude pruning are suboptimal on Transformer architectures. State-of-the-art methods hence solve a layer-wise mask selection problem, the problem of finding a pruning mask which minimizes the per-layer pruning error on a small set of calibration data. Exactly solving this problem to optimality using Integer Programming (IP) solvers is computationally infeasible due to its combinatorial nature and the size of the search space, and existing approaches therefore rely on approximations or heuristics. In this work, we demonstrate that the mask selection problem can be made drastically more tractable at LLM scale. To that end, we decouple the rows by enforcing equal sparsity levels per row. This allows us to derive optimal 1-swaps (exchanging one kept and one pruned weight) that can be computed efficiently using the Gram matrix of the calibration data. Using these observations, we propose a tractable and simple 1-swap algorithm that warm starts from any pruning mask, runs efficiently on GPUs at LLM scale, and is essentially hyperparameter-free. We demonstrate that our approach reduces per-layer pruning error by up to 60% over Wanda (Sun et al., 2023) and consistently improves perplexity and zero-shot accuracy across state-of-the-art GPT architectures.
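The abstract describes scoring 1-swaps (exchanging one kept and one pruned weight in a row) via the Gram matrix of the calibration data. Below is a minimal numpy sketch of that idea under a common formulation: for a weight row `w` with mask `m`, the per-row pruning error is the quadratic form `r^T G r`, where `r = w - m*w` is the removed part and `G = X^T X` is the Gram matrix of calibration inputs `X`. The function name and the explicit double loop are illustrative only; the paper's actual algorithm is vectorized and GPU-parallel, and may differ in details.

```python
import numpy as np

def best_one_swap(w, mask, G):
    """Score every (prune index i, restore index j) 1-swap for one weight row
    and return the swap that most reduces the pruning error r^T G r.

    w    : weight row, shape (d,)
    mask : binary keep-mask, shape (d,)
    G    : Gram matrix X^T X of the calibration inputs, shape (d, d)

    Returns (delta, i, j): the (negative) error change and the swapped
    indices, or (0.0, None, None) if no swap improves the error.
    """
    r = w * (1 - mask)              # currently removed weights
    Gr = G @ r                      # reused for every candidate pair
    kept = np.flatnonzero(mask)
    pruned = np.flatnonzero(1 - mask)
    best = (0.0, None, None)
    for i in kept:                  # candidate to newly prune
        for j in pruned:            # candidate to restore
            # Error change of r -> r + w_i e_i - w_j e_j, expanded via G:
            delta = (2 * w[i] * Gr[i] - 2 * w[j] * Gr[j]
                     + w[i] ** 2 * G[i, i] + w[j] ** 2 * G[j, j]
                     - 2 * w[i] * w[j] * G[i, j])
            if delta < best[0]:
                best = (delta, i, j)
    return best
```

Because rows are decoupled by the equal-per-row sparsity constraint, such a search can run independently (and in parallel) for every row, and warm-starts from any initial mask, e.g. one produced by Wanda.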
Problem

Research questions and friction points this paper is trying to address.

Reducing per-layer pruning error in large language models efficiently
Recovering model performance after pruning without full retraining
Optimizing pruning masks at scale with a GPU-friendly 1-swap algorithm
Innovation

Methods, ideas, or system contributions that make the work stand out.

Row-wise sparsity constraint simplifies combinatorial problem
Efficient optimal 1-swap computation using Gram matrix
GPU-scalable, hyperparameter-free mask refinement algorithm
Max Zimmer
Zuse Institute Berlin
Deep Learning · Optimization · Mathematics
Christophe Roux
TU Berlin, Zuse Institute Berlin
Optimization · Machine Learning
Moritz Wagner
Department for AI in Society, Science, and Technology, Zuse Institute Berlin, Germany; Institute of Mathematics, Technische Universität Berlin, Germany
Deborah Hendrych
PhD Student at Zuse Institute Berlin
Optimization
S. Pokutta
Department for AI in Society, Science, and Technology, Zuse Institute Berlin, Germany; Institute of Mathematics, Technische Universität Berlin, Germany