CAST: Continuous and Differentiable Semi-Structured Sparsity-Aware Training for Large Language Models

📅 2025-09-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Joint optimization of sparse structure and weights remains challenging for hardware-efficient LLM deployment. Method: This paper proposes CAST, a continuous and differentiable semi-structured sparsity-aware training framework that, for the first time, enables end-to-end differentiable training under N:M sparsity patterns. CAST integrates adaptive L1 regularization, sparsity-aware optimization, weight-scaling compensation, and self-distillation. Contribution/Results: CAST achieves efficient sparsification across model scales from 125M to 13B parameters. On LLaMA2-7B, it attains 2:4 sparsity using only 2% of the pretraining tokens, incurring only a +0.09 perplexity increase while improving zero-shot accuracy by 0.36%. It significantly reduces inference latency and memory footprint, and establishes a predictive scaling law for sparse model performance.

📝 Abstract
Sparsity-aware training is an effective approach for transforming large language models (LLMs) into hardware-friendly sparse patterns, thereby reducing latency and memory consumption during inference. In this paper, we propose Continuous Adaptive Sparse Trainer (CAST), a fully continuous and differentiable sparsity-aware training framework for semi-structured (or "N:M") sparse models. Unlike previous approaches that optimize sparsity patterns and weights separately, CAST enables seamless joint optimization during training, while progressively transforming the model into the desired sparsity format. Specifically, CAST introduces three key components: 1) AdamS, a sparsity-aware optimizer that leverages adaptive L1 decay to promote uniform sparsification across all parameters; 2) Weight Scaling, a module designed to mitigate the magnitude reduction caused by decay while preserving desired sparsity patterns; 3) Knowledge Distillation, which employs the dense model as a self-teacher to enhance training efficiency. We evaluate CAST under 2:4 sparsity patterns across multiple model families, ranging from 125M to 13B parameters. Our results demonstrate significant improvements over previous state-of-the-art methods in both perplexity and zero-shot accuracy with minimal training resources. Notably, on LLaMA2-7B, our 2:4 sparse model achieves a negligible perplexity increase of 0.09 and a 0.36% gain in zero-shot accuracy compared to the dense model using only 2% of the original pretraining tokens. Additionally, we establish an accurate and robust empirical scaling law to predict sparse model performance given adequate training resources. Finally, we demonstrate the practical applicability of our sparse models by evaluating them under quantization and fine-tuning scenarios.
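The "N:M" (here 2:4) semi-structured pattern in the abstract means that in every group of 4 consecutive weights, at most 2 are nonzero, which is what sparse tensor hardware accelerates. A minimal sketch of producing such a mask by keeping the two largest-magnitude weights per group (function name and magnitude criterion are our illustration, not necessarily the paper's exact selection rule):

```python
import numpy as np

def nm_sparsify(weights: np.ndarray, n: int = 2, m: int = 4) -> np.ndarray:
    """Keep the n largest-magnitude entries in each group of m
    consecutive weights along the flattened array; zero the rest.
    Illustrative sketch of the 2:4 semi-structured pattern."""
    flat = weights.reshape(-1, m)                      # groups of m
    # indices of the (m - n) smallest magnitudes per group
    drop = np.argsort(np.abs(flat), axis=1)[:, : m - n]
    mask = np.ones_like(flat, dtype=bool)
    np.put_along_axis(mask, drop, False, axis=1)
    return (flat * mask).reshape(weights.shape)

w = np.array([0.9, -0.1, 0.05, -0.7, 0.2, 0.3, -0.01, 0.4])
print(nm_sparsify(w))  # → [ 0.9  0.  0.  -0.7  0.  0.3  0.  0.4]
```

Note that CAST's point is to reach this pattern gradually and differentiably during training rather than by one-shot masking as above.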
Problem

Research questions and friction points this paper is trying to address.

Reducing latency and memory consumption in large language models
Enabling joint optimization of sparsity patterns and model weights
Achieving minimal performance loss while maintaining hardware-friendly sparsity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Continuous differentiable framework for semi-structured sparsity
AdamS optimizer with adaptive L1 decay mechanism
Weight scaling and knowledge distillation for efficiency
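The AdamS idea above can be pictured as an Adam update followed by an L1 shrinkage (proximal) step that pulls weights toward zero. The sketch below is our simplified illustration under that assumption; the paper's AdamS adapts the L1 decay strength adaptively and targets uniform sparsification, which this fixed-strength version omits:

```python
import numpy as np

def adam_l1_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999,
                 eps=1e-8, l1=1e-4):
    """One standard Adam step followed by a soft-threshold shrink.
    Minimal sketch of sparsity-promoting L1 decay (fixed strength)."""
    m = b1 * m + (1 - b1) * g                  # first-moment estimate
    v = b2 * v + (1 - b2) * g * g              # second-moment estimate
    m_hat = m / (1 - b1 ** t)                  # bias correction
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    # proximal L1 step: shrink magnitudes toward zero, clip at zero,
    # so small weights are driven exactly to zero over many steps
    w = np.sign(w) * np.maximum(np.abs(w) - lr * l1, 0.0)
    return w, m, v
```

The shrink term is what progressively transforms the model into the target sparse format; the paper's Weight Scaling module then compensates for the magnitude reduction this decay causes.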
Weiyu Huang
Tsinghua University
Efficient ML
Yuezhou Hu
Tsinghua University
Jun Zhu
Department of Computer Science and Technology, Institute for AI, BNRist Center, THBI Lab, Tsinghua-Bosch Joint ML Center, Tsinghua University
Jianfei Chen
Associate Professor, Tsinghua University
Machine Learning