Exploring Model Invariance with Discrete Search for Ultra-Low-Bit Quantization

📅 2025-02-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing post-training quantization methods for large language models (LLMs) suffer severe accuracy degradation in ultra-low-bit regimes (e.g., 2-bit). Method: This paper proposes InvarExplore, a unified quantization framework that jointly exploits multiple model invariances: weight permutation invariance, scaling invariance, and channel reordering invariance. It introduces a novel discrete search algorithm to systematically explore weight permutations, overcoming the limitation that such structural invariances cannot be captured by gradient-based optimization. Contribution/Results: The method significantly improves the inference accuracy of mainstream LLMs (e.g., Llama-2/3, Qwen) under 2-bit quantization, achieving an average +15.2% gain on benchmarks including Winogrande and HellaSwag. It is fully compatible with state-of-the-art quantization techniques and delivers consistent add-on performance gains, establishing a scalable paradigm for ultra-low-bit LLM deployment.
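To make the idea concrete, here is a minimal sketch of the kind of discrete search the summary describes: hill-climbing over channel permutations to reduce group-wise 2-bit quantization error. Permuting a layer's channels (with the inverse permutation applied on the adjacent layer) leaves the model's function unchanged but changes which weights share a quantization group, so the error can be reduced without any gradients. The function names, the grouping scheme, and the swap-based search are illustrative assumptions, not the paper's actual InvarExplore algorithm.

```python
import numpy as np

def quantize_2bit(w, group=8):
    """Symmetric 2-bit quantization with one scale per group of columns.

    Illustrative only: four levels {-1.5, -0.5, 0.5, 1.5} * scale.
    """
    out = np.empty_like(w)
    for s in range(0, w.shape[1], group):
        blk = w[:, s:s + group]
        scale = np.abs(blk).max() / 1.5
        if scale == 0:
            scale = 1.0
        q = np.clip(np.round(blk / scale - 0.5) + 0.5, -1.5, 1.5)
        out[:, s:s + group] = q * scale
    return out

def quant_error(w, group=8):
    """Frobenius-norm error introduced by quantizing w."""
    return float(np.linalg.norm(w - quantize_2bit(w, group)))

def search_permutation(w, steps=300, group=8, seed=0):
    """Hill-climbing over column (channel) permutations.

    Reordering channels is function-preserving but changes which
    weights share a quantization group, so the error landscape is
    discrete: we explore it with random pairwise swaps, keeping
    only the swaps that reduce quantization error.
    """
    rng = np.random.default_rng(seed)
    perm = np.arange(w.shape[1])
    best = quant_error(w[:, perm], group)
    for _ in range(steps):
        i, j = rng.integers(0, w.shape[1], size=2)
        cand = perm.copy()
        cand[i], cand[j] = cand[j], cand[i]
        err = quant_error(w[:, cand], group)
        if err < best:  # accept only improving swaps
            perm, best = cand, err
    return perm, best
```

The accept-only-improving rule guarantees the returned error never exceeds the identity permutation's error; the actual paper combines this kind of discrete exploration with other invariances (scaling, reordering) rather than using permutation search alone.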

📝 Abstract
Large language models have been increasing in size due to their success in a wide range of applications. This calls for a pressing need to reduce memory usage to make them more accessible. Post-training quantization is a popular technique which uses fewer bits (e.g., 4--8 bits) to represent the model without retraining it. However, it remains a challenging task to perform quantization in an ultra-low-bit setup (e.g., 2 bits). In this paper, we propose InvarExplore, a unified framework that systematically explores different model invariances at the same time, allowing us to take advantage of the synergy among the different types of invariance. Importantly, InvarExplore features a discrete search algorithm that enables us to explore permutation invariance, which is under-studied because it cannot be optimized with gradient-based methods. Results show that InvarExplore is compatible with existing state-of-the-art methods, achieving an add-on performance improvement over strong competing methods.
Problem

Research questions and friction points this paper is trying to address.

Reducing memory usage in large language models
Enhancing ultra-low-bit quantization techniques
Exploring permutation invariance with discrete search
Innovation

Methods, ideas, or system contributions that make the work stand out.

A discrete search algorithm for exploring permutation invariance
Joint exploitation of multiple model invariances (permutation, scaling, channel reordering)
A unified ultra-low-bit quantization framework compatible with existing methods