Thanos: A Block-wise Pruning Algorithm for Efficient Large Language Model Compression

📅 2025-04-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address high memory consumption and computational overhead in large language model (LLM) inference, this paper proposes Thanos—a hardware-friendly, block-wise structured pruning algorithm. Methodologically, Thanos introduces the first adaptive block-level masking mechanism, enabling dynamic *n:m* sparsity patterns while preserving weight block locality, fine-grained importance awareness, and hardware-aligned sparse structure generation. It integrates block-wise sparsity modeling, adaptive importance estimation, and hardware-aware optimization to balance flexibility and deployment efficiency. Experimental results demonstrate that Thanos achieves state-of-the-art performance among structured pruning methods across multiple LLMs, with significantly lower accuracy degradation than existing structured approaches—and even outperforms mainstream unstructured pruning techniques. On typical GPUs, it delivers up to 2.3× inference speedup and 45% reduction in GPU memory usage.
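To make the *n:m* sparsity pattern mentioned above concrete: in an n:m scheme, every consecutive group of m weights retains only its n most important entries, which is what hardware sparse tensor units (e.g., 2:4 on recent GPUs) accelerate. Below is a minimal illustrative sketch using plain magnitude as the importance score; Thanos itself uses a more sophisticated block-wise importance estimate, so the criterion here is a simplified stand-in.

```python
def prune_n_m(weights, n=2, m=4):
    """Apply an n:m sparsity pattern to a flat list of weights.

    In every group of m consecutive weights, keep the n entries with
    the largest magnitude and zero out the rest. Magnitude is used as
    a simple importance proxy (illustrative only, not Thanos's score).
    """
    pruned = []
    for i in range(0, len(weights), m):
        group = weights[i:i + m]
        # indices of the n largest-magnitude entries in this group survive
        keep = sorted(range(len(group)),
                      key=lambda j: abs(group[j]),
                      reverse=True)[:n]
        pruned.extend(w if j in keep else 0.0 for j, w in enumerate(group))
    return pruned


row = [0.9, -0.1, 0.05, -1.2, 0.3, 0.2, -0.7, 0.01]
print(prune_n_m(row))
# [0.9, 0.0, 0.0, -1.2, 0.3, 0.0, -0.7, 0.0]
```

Because the zeros fall in a fixed, regular pattern (exactly m−n per group), the surviving weights can be stored compactly and matched to hardware sparse kernels, which is what makes n:m sparsity deployment-friendly compared to unstructured pruning.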


📝 Abstract
This paper presents Thanos, a novel weight-pruning algorithm designed to reduce the memory footprint and enhance the computational efficiency of large language models (LLMs) by removing redundant weights while maintaining accuracy. Thanos introduces a block-wise pruning strategy with adaptive masks that dynamically adjust to weight importance, enabling flexible sparsity patterns and structured formats, such as $n:m$ sparsity, optimized for hardware acceleration. Experimental evaluations demonstrate that Thanos achieves state-of-the-art performance in structured pruning and outperforms existing methods in unstructured pruning. By providing an efficient and adaptable approach to model compression, Thanos offers a practical solution for deploying large models in resource-constrained environments.
Problem

Research questions and friction points this paper is trying to address.

Reduces memory footprint of large language models
Enhances computational efficiency via block-wise pruning
Maintains accuracy while removing redundant weights
Innovation

Methods, ideas, or system contributions that make the work stand out.

Block-wise pruning strategy for LLM compression
Adaptive masks enable flexible sparsity patterns
Optimized for hardware acceleration efficiency
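The bullets above can be sketched as a small per-block pruning loop: the mask is not fixed globally but recomputed for each weight block from an importance score. The score used here (|weight| × input activation norm) is a common proxy in the pruning literature and stands in for the paper's actual estimator, which may differ.

```python
def block_prune(weight_row, input_norms, block_size=4, sparsity=0.5):
    """Prune one weight row block by block with an adaptive mask.

    For each block of `block_size` weights, compute an importance
    score (|w| * input activation norm -- an illustrative proxy, not
    necessarily Thanos's estimator) and zero out the lowest-scoring
    fraction `sparsity` of the block.
    """
    out = []
    drop_per_block = int(block_size * sparsity)  # weights removed per block
    for i in range(0, len(weight_row), block_size):
        w = weight_row[i:i + block_size]
        x = input_norms[i:i + block_size]
        scores = [abs(wi) * xi for wi, xi in zip(w, x)]
        # adaptive part: the mask depends on this block's scores
        drop = sorted(range(len(w)), key=lambda j: scores[j])[:drop_per_block]
        out.extend(0.0 if j in drop else wi for j, wi in enumerate(w))
    return out


# A large weight (1.0) can still be pruned if its input channel is
# nearly inactive, while a smaller weight on an active channel survives.
print(block_prune([1.0, -0.5, 0.2, 0.8], [0.1, 2.0, 3.0, 1.0]))
# [0.0, -0.5, 0.0, 0.8]
```

Working at block granularity keeps the surviving weights spatially grouped (block locality), which is what allows the resulting sparse structure to map onto hardware-friendly formats rather than scattering nonzeros arbitrarily.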
Ivan Ilin
GenAI Center of Excellence, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia
Peter Richtarik
Professor, KAUST
optimization, machine learning, federated learning, deep learning, computer science