The Unseen Frontier: Pushing the Limits of LLM Sparsity with Surrogate-Free ADMM

📅 2025-10-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Pruning of large language models (LLMs) suffers a sharp accuracy drop beyond 50–60% sparsity, largely because existing methods optimize surrogate (proxy) objectives rather than the true pruning objective. Method: This paper proposes Elsa, a surrogate-free constrained optimization framework for LLM pruning based on the alternating direction method of multipliers (ADMM), which tackles the sparsity constraint directly instead of through a proxy loss. Contribution/Results: On LLaMA-2-7B at 90% sparsity, Elsa achieves 7.8× lower perplexity than the best existing method, markedly improving accuracy retention at extreme sparsity. A quantized variant, Elsa-L, scales to models of up to 27B parameters and comes with theoretical convergence guarantees—a formal assurance for ADMM-based LLM pruning.

📝 Abstract
Neural network pruning is a promising technique to mitigate the excessive computational and memory requirements of large language models (LLMs). Despite its promise, however, progress in this area has diminished, as conventional methods are seemingly unable to surpass moderate sparsity levels (50-60%) without severely degrading model accuracy. This work breaks through the current impasse, presenting a principled and effective method called $\texttt{Elsa}$, which achieves extreme sparsity levels of up to 90% while retaining high model fidelity. This is done by identifying several limitations in current practice, all of which can be traced back to their reliance on a surrogate objective formulation. $\texttt{Elsa}$ tackles this issue directly and effectively via standard and well-established constrained optimization techniques based on ADMM. Our extensive experiments across a wide range of models and scales show that $\texttt{Elsa}$ achieves substantial improvements over existing methods; e.g., it achieves 7.8$\times$ less perplexity than the best existing method on LLaMA-2-7B at 90% sparsity. Furthermore, we present $\texttt{Elsa}_{\text{-L}}$, a quantized variant that scales to extremely large models (27B), and establish its theoretical convergence guarantees. These results highlight meaningful progress in advancing the frontier of LLM sparsity, while promising that significant opportunities for further advancement may remain in directions that have so far attracted limited exploration.
Problem

Research questions and friction points this paper is trying to address.

Achieving extreme sparsity in LLMs without accuracy degradation
Overcoming limitations of surrogate objectives in neural pruning
Developing constrained optimization methods for model compression
Innovation

Methods, ideas, or system contributions that make the work stand out.

Applies surrogate-free, ADMM-based constrained optimization directly to the pruning objective
Achieves up to 90% sparsity while retaining high model fidelity
Introduces a quantized variant, Elsa-L, that scales to 27B-parameter models with theoretical convergence guarantees
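The core idea—treating sparsity as a hard constraint handled by ADMM rather than a surrogate penalty—can be illustrated with a generic sketch. This is not the paper's exact formulation: the layer-wise least-squares reconstruction objective, the variable names, and all hyperparameters below are illustrative assumptions. ADMM splits the problem into a smooth subproblem (closed-form least-squares update), a projection onto the top-k sparsity set, and a dual update.

```python
import numpy as np

def project_topk(M, k):
    """Euclidean projection onto matrices with at most k nonzero entries:
    keep the k largest-magnitude entries, zero out the rest."""
    out = np.zeros_like(M)
    idx = np.argsort(np.abs(M).ravel())[-k:]  # indices of the k largest magnitudes
    out.ravel()[idx] = M.ravel()[idx]
    return out

def admm_prune(X, Y, k, rho=1.0, iters=100):
    """Illustrative ADMM loop for  min_W ||XW - Y||_F^2  s.t.  nnz(W) <= k.

    Splitting: f(W) = ||XW - Y||_F^2, g(Z) = indicator of the k-sparse set,
    with the consensus constraint W = Z and scaled dual variable U.
    """
    d, m = X.shape[1], Y.shape[1]
    Z = np.zeros((d, m))
    U = np.zeros((d, m))
    # W-update is a ridge-like solve: (2 X^T X + rho I) W = 2 X^T Y + rho (Z - U)
    G = 2.0 * X.T @ X + rho * np.eye(d)
    for _ in range(iters):
        W = np.linalg.solve(G, 2.0 * X.T @ Y + rho * (Z - U))
        Z = project_topk(W + U, k)   # nonconvex projection onto the sparsity set
        U = U + W - Z                # dual ascent on the consensus constraint
    return Z                          # the sparse iterate satisfies nnz(Z) <= k
```

Because the projection enforces the sparsity constraint exactly at every iteration, the returned weights never exceed the nonzero budget—unlike surrogate-penalty methods, where the final sparsity depends on a tuned regularization strength.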