TraceNAS: Zero-shot LLM Pruning via Gradient Trace Correlation

📅 2026-02-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing structured pruning methods for large language models either neglect global structural dependencies or incur high computational costs from training-aware procedures. This work proposes a training-free neural architecture search framework built around a scale-invariant, zero-shot proxy metric based on gradient trace correlation. The metric maintains strong alignment with the pretraining loss landscape while enabling joint optimization of non-uniform pruning structures across both model depth and width. To the best of our knowledge, this is the first approach to achieve globally aware yet highly efficient joint pruning in a zero-shot setting. Evaluated on the Llama and Qwen model families, the method requires only 8.5 hours on a single GPU, an order-of-magnitude reduction in GPU-hours, while matching the performance of training-aware baselines.
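The paper's exact proxy is not reproduced here, but the core idea the summary describes, a scale-invariant correlation between gradient "traces" of the pretrained and pruned models, can be sketched on a toy linear model. Everything below (`grad_trace`, `trace_correlation`, zeroing channels as "pruning") is an illustrative assumption, not TraceNAS's actual implementation:

```python
import numpy as np

def grad_trace(weights, X, y):
    """Per-example gradient magnitudes ("traces") for a toy linear model with
    loss_i = 0.5 * (w @ x_i - y_i)^2, so grad_i = (w @ x_i - y_i) * x_i.
    Returns the L2 norm of each per-example gradient."""
    residuals = X @ weights - y           # shape (n,)
    grads = residuals[:, None] * X        # shape (n, d)
    return np.linalg.norm(grads, axis=1)  # shape (n,)

def trace_correlation(traces_full, traces_pruned):
    """Pearson correlation between the two trace vectors. Correlation is
    invariant to positive rescaling of either input, which is one simple way
    to obtain a scale-invariant alignment score."""
    a = traces_full - traces_full.mean()
    b = traces_pruned - traces_pruned.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 16))
y = rng.normal(size=64)
w_full = rng.normal(size=16)

# "Prune" by zeroing weight channels; a mild prune should generally keep the
# per-example gradient profile better aligned than an aggressive one.
w_mild = w_full.copy(); w_mild[:2] = 0.0
w_hard = w_full.copy(); w_hard[:12] = 0.0

t_full = grad_trace(w_full, X, y)
score_mild = trace_correlation(t_full, grad_trace(w_mild, X, y))
score_hard = trace_correlation(t_full, grad_trace(w_hard, X, y))
```

Using correlation (rather than a raw distance between gradients) means the score does not change if a pruned model's gradients are uniformly rescaled, which is the kind of scale invariance the summary attributes to the proxy.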

📝 Abstract
Structured pruning is essential for efficient deployment of Large Language Models (LLMs). The varying sensitivity of LLM sub-blocks to pruning necessitates the identification of optimal non-uniformly pruned models. Existing methods evaluate the importance of layers, attention heads, or weight channels in isolation. Such localized focus ignores the complex global structural dependencies that exist across the model. Training-aware structured pruning addresses global dependencies, but its computational cost can be just as expensive as post-pruning training. To alleviate the computational burden of training-aware pruning and capture global structural dependencies, we propose TraceNAS, a training-free Neural Architecture Search (NAS) framework that jointly explores structured pruning of LLM depth and width. TraceNAS identifies pruned models that maintain a high degree of loss landscape alignment with the pretrained model using a scale-invariant zero-shot proxy, effectively selecting models that exhibit maximal performance potential during post-pruning training. TraceNAS is highly efficient, enabling high-fidelity discovery of pruned models on a single GPU in 8.5 hours, yielding a 10× reduction in GPU-hours compared to training-aware methods. Evaluations on the Llama and Qwen families demonstrate that TraceNAS is competitive with training-aware baselines across commonsense and reasoning benchmarks.
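The training-free search loop the abstract describes (sample candidate non-uniform pruning structures, score each with a zero-shot proxy, keep the best one under a compression budget) can be sketched as a minimal random-search skeleton. The search space, the `proxy` stub, and the budget constraint below are hypothetical placeholders, not the paper's actual search algorithm or metric:

```python
import random

# Hypothetical search space: a per-block width keep-ratio for each of
# N_BLOCKS transformer blocks (depth choices omitted for brevity).
WIDTH_CHOICES = (1.0, 0.75, 0.5)
N_BLOCKS = 4

def proxy(cfg):
    """Stand-in for a zero-shot score such as gradient trace correlation.
    Here it simply rewards retained capacity, with a tiny deterministic
    tie-breaker; illustrative only, not the paper's metric."""
    return sum(cfg) - 0.01 * cfg.index(min(cfg))

def param_fraction(cfg):
    """Fraction of parameters kept under this width configuration."""
    return sum(cfg) / len(cfg)

def search(budget_fraction, n_samples=200, seed=0):
    """Training-free NAS loop: sample non-uniform width configs, discard
    those over the parameter budget, return the proxy-maximizing survivor."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n_samples):
        cfg = tuple(rng.choice(WIDTH_CHOICES) for _ in range(N_BLOCKS))
        if param_fraction(cfg) > budget_fraction:
            continue  # violates the compression target
        score = proxy(cfg)
        if score > best_score:
            best, best_score = cfg, score
    return best, best_score

best_cfg, best_score = search(budget_fraction=0.75)
```

Because scoring a candidate needs only a few proxy evaluations rather than any training, such a loop can scan hundreds of pruning structures cheaply, which is consistent with the single-GPU, 8.5-hour budget the abstract reports.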
Problem

Research questions and friction points this paper is trying to address.

Structured pruning
Large Language Models
Global structural dependencies
Non-uniform pruning
Computational efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Zero-shot pruning
Gradient trace correlation
Training-free NAS
Structured pruning
Loss landscape alignment