Unifying Uniform and Binary-coding Quantization for Accurate Compression of Large Language Models

📅 2025-06-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the substantial accuracy degradation caused by large language model (LLM) quantization, this paper proposes UniQuanF, a unified framework that integrates uniform quantization (UQ) and binary-coding quantization (BCQ). Methodologically, it introduces: (1) a flexible, differentiable unified quantization model that supports non-uniform quantization levels and adaptive value-to-level mapping; (2) a unified initialization strategy coupled with local and periodic mapping techniques that jointly enhance representational capacity and optimization stability; and (3) a unification theorem guaranteeing that the framework incurs no additional computational or memory overhead after optimization. Evaluated on the GSM8K benchmark, UniQuanF achieves up to a 4.60% absolute accuracy improvement over state-of-the-art UQ and BCQ methods, markedly mitigating quantization-induced accuracy loss. The framework thus offers a quantization approach for LLMs that is grounded in theory and validated in practice, enabling efficient, high-fidelity deployment.

📝 Abstract
How can we quantize large language models while preserving accuracy? Quantization is essential for deploying large language models (LLMs) efficiently. Binary-coding quantization (BCQ) and uniform quantization (UQ) are promising quantization schemes that have strong expressiveness and optimizability, respectively. However, neither scheme leverages both advantages. In this paper, we propose UniQuanF (Unified Quantization with Flexible Mapping), an accurate quantization method for LLMs. UniQuanF harnesses both strong expressiveness and optimizability by unifying the flexible mapping technique of UQ and the non-uniform quantization levels of BCQ. We propose unified initialization, and local and periodic mapping techniques to optimize the parameters in UniQuanF precisely. After optimization, our unification theorem removes computational and memory overhead, allowing us to utilize the superior accuracy of UniQuanF without extra deployment costs induced by the unification. Experimental results demonstrate that UniQuanF outperforms existing UQ and BCQ methods, achieving up to 4.60% higher accuracy on the GSM8K benchmark.
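The two schemes the abstract contrasts can be illustrated in a few lines of NumPy. UQ snaps each weight to evenly spaced levels through a round-to-nearest value-to-level mapping, while BCQ approximates weights as a sum of scaled binary codes, yielding non-uniform levels. The sketch below is a simplified illustration (per-tensor scales, greedy residual fitting for BCQ), not the paper's actual UniQuanF algorithm:

```python
import numpy as np

def uq_quantize(w, bits=4):
    # Uniform quantization: evenly spaced levels between min(w) and max(w).
    qmax = 2**bits - 1
    scale = (w.max() - w.min()) / qmax
    zero = w.min()
    q = np.round((w - zero) / scale)   # round-to-nearest value-to-level mapping
    return zero + scale * q            # dequantized approximation

def bcq_quantize(w, bits=4):
    # Binary-coding quantization: w ~ sum_k alpha_k * b_k with b_k in {-1, +1}.
    # Greedy residual fitting; real BCQ methods refine with alternating updates.
    residual = w.copy()
    approx = np.zeros_like(w)
    for _ in range(bits):
        b = np.sign(residual)
        b[b == 0] = 1
        alpha = np.abs(residual).mean()  # least-squares scale for fixed b
        approx += alpha * b
        residual = w - approx
    return approx
```

With multiple binary codes, BCQ's reconstruction levels are the non-uniform sums of the alpha_k; this expressiveness is what UniQuanF combines with UQ's optimizable mapping.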
Problem

Research questions and friction points this paper is trying to address.

Unify uniform and binary-coding quantization for LLMs
Preserve model accuracy during quantization
Eliminate extra deployment costs post-optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unifies uniform and binary-coding quantization techniques
Employs flexible mapping for accurate LLM compression
Eliminates overhead with optimized initialization and mapping