HBLLM: Wavelet-Enhanced High-Fidelity 1-Bit Quantization for LLMs

📅 2025-11-30
🤖 AI Summary
To address the severe fidelity degradation in large language models (LLMs) under 1-bit quantization, this paper proposes HBLLM—a high-fidelity 1-bit quantization method based on Haar wavelet transform. Our approach decomposes weight matrices into frequency subbands to enhance representational capacity at ultra-low bitwidths. We introduce a frequency-aware intra-group partitioning strategy and an ℓ₂-norm-driven saliency column selection mechanism to preserve critical weights. Within each subband, non-salient weights are quantized via mean-sharing, enabling structural awareness and efficient compression. Evaluated on OPT and LLaMA architectures, HBLLM achieves state-of-the-art performance: a perplexity of 6.71 on LLaMA2-13B and an average weight storage of only 1.08 bits—substantially outperforming existing 1-bit quantization methods while maintaining model accuracy.
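The frequency decomposition above can be illustrated with a single-level Haar transform applied along the rows of a weight matrix. This is a minimal sketch of the general technique, not the paper's implementation; function names and the row-wise orientation are illustrative assumptions.

```python
import numpy as np

def haar_decompose_rows(W):
    """Single-level Haar wavelet transform along each row.

    Splits each row into a low-frequency (average) subband and a
    high-frequency (detail) subband, the kind of decomposition
    HBLLM quantizes per band. Assumes an even column count.
    """
    even, odd = W[:, 0::2], W[:, 1::2]
    low = (even + odd) / np.sqrt(2)    # approximation coefficients
    high = (even - odd) / np.sqrt(2)   # detail coefficients
    return low, high

def haar_reconstruct_rows(low, high):
    """Inverse single-level Haar transform (exact reconstruction)."""
    even = (low + high) / np.sqrt(2)
    odd = (low - high) / np.sqrt(2)
    W = np.empty((low.shape[0], 2 * low.shape[1]))
    W[:, 0::2], W[:, 1::2] = even, odd
    return W

W = np.random.randn(4, 8)
low, high = haar_decompose_rows(W)
assert np.allclose(haar_reconstruct_rows(low, high), W)
```

Because the Haar transform is orthogonal and exactly invertible, quantization error introduced in each subband maps back to the weight matrix in a controlled way, which is what lets the method spend its single bit more effectively per frequency band.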

📝 Abstract
We introduce HBLLM, a wavelet-enhanced high-fidelity $1$-bit post-training quantization method for Large Language Models (LLMs). By leveraging Haar wavelet transforms to enhance expressive capacity through frequency decomposition, HBLLM significantly improves quantization fidelity while maintaining minimal overhead. This approach features two innovative structure-aware grouping strategies: (1) frequency-aware multi-parameter intra-row grouping and (2) $\ell_2$-norm-based saliency-driven column selection. For non-salient weights, a shared mean is employed across quantization groups within each frequency band to optimize storage efficiency. Experiments conducted on the OPT and LLaMA models demonstrate that HBLLM achieves state-of-the-art performance in $1$-bit quantization, attaining a perplexity of $6.71$ on LLaMA$2$-$13$B with an average weight storage of only $1.08$ bits. Code available at: https://github.com/Yeyke/HBLLM.
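The two grouping strategies in the abstract can be sketched together: rank columns by $\ell_2$ norm, keep the most salient ones at full precision, and quantize the rest to one bit with a single shared magnitude. This is a hedged toy version; the function name, the per-block scalar mean, and the salient-column count are assumptions standing in for HBLLM's per-band shared means.

```python
import numpy as np

def quantize_1bit(W, num_salient=2):
    """Toy saliency-aware 1-bit quantization.

    Columns with the largest l2 norms are kept at full precision
    (saliency-driven column selection); all remaining weights are
    quantized to sign(w) * shared_mean, where shared_mean is the
    mean |w| over the non-salient block -- a scalar stand-in for
    HBLLM's shared mean within each frequency band.
    """
    col_norms = np.linalg.norm(W, axis=0)          # l2 norm per column
    salient = np.argsort(col_norms)[-num_salient:]
    mask = np.zeros(W.shape[1], dtype=bool)
    mask[salient] = True

    Q = np.empty_like(W)
    Q[:, mask] = W[:, mask]                        # salient: full precision
    shared_mean = np.abs(W[:, ~mask]).mean()       # one scalar for the group
    Q[:, ~mask] = np.sign(W[:, ~mask]) * shared_mean
    return Q, mask
```

Storing only a sign per non-salient weight plus one shared scalar per group is what drives the average bitwidth toward 1 bit while the few full-precision columns protect the weights that matter most.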
Problem

Research questions and friction points this paper is trying to address.

How can 1-bit quantization avoid the severe fidelity degradation LLMs suffer at ultra-low bitwidths?
How can grouping exploit weight-matrix structure so that critical weights survive 1-bit compression?
How can storage overhead stay near 1 bit per weight (here, 1.08 bits) without sacrificing accuracy?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Wavelet-enhanced 1-bit quantization for LLMs
Frequency-aware grouping and saliency-driven column selection
Shared mean optimization for storage efficiency
Authors
Ningning Chen (Sun Yat-sen University)
Weicai Ye (Kling Team, Kuaishou Technology)
Ying Jiang (Sun Yat-sen University)