BitNet v2: Native 4-bit Activations with Hadamard Transformation for 1-bit LLMs

📅 2025-04-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the difficulty of quantizing activations in 1-bit large language models (LLMs), where activation outliers break down under ultra-low-bit quantization, this paper introduces the first framework to natively support 4-bit activations for 1-bit LLMs. The core innovation is the H-BitLinear module, which applies an online Hadamard transform before activation quantization, smoothing sharp, outlier-heavy activation distributions into more Gaussian-like forms suited to low-bit representation. Experiments show that BitNet v2 trained with native 4-bit activations achieves near-lossless performance while substantially reducing memory footprint and computational cost for batched inference, and that its 8-bit activation variant matches the performance of BitNet b1.58. This work demonstrates that native low-bit activation quantization is practical for 1-bit LLMs, easing the deployment of ultra-low-bit large models.

📝 Abstract
Efficient deployment of 1-bit Large Language Models (LLMs) is hindered by activation outliers, which complicate quantization to low bit-widths. We introduce BitNet v2, a novel framework enabling native 4-bit activation quantization for 1-bit LLMs. To tackle outliers in attention and feed-forward network activations, we propose H-BitLinear, a module applying an online Hadamard transformation prior to activation quantization. This transformation smooths sharp activation distributions into more Gaussian-like forms, suitable for low-bit representation. Experiments show BitNet v2 trained from scratch with 8-bit activations matches BitNet b1.58 performance. Crucially, BitNet v2 achieves minimal performance degradation when trained with native 4-bit activations, significantly reducing memory footprint and computational cost for batched inference.
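The core idea behind H-BitLinear, rotating activations with an orthogonal Hadamard transform so that a single outlier channel is spread evenly across all channels before low-bit quantization, can be sketched in a few lines of NumPy. This is an illustrative reconstruction under stated assumptions, not the paper's implementation: the function names and the symmetric per-tensor absmax INT4 scheme are assumptions made for the sketch.

```python
import numpy as np

def hadamard_transform(x: np.ndarray) -> np.ndarray:
    """Orthonormal fast Walsh-Hadamard transform of a 1-D vector
    whose length is a power of two (O(n log n), no explicit matrix)."""
    x = np.asarray(x, dtype=np.float64).copy()
    n = x.size
    assert n > 0 and n & (n - 1) == 0, "length must be a power of two"
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x / np.sqrt(n)  # 1/sqrt(n) makes the transform its own inverse

def absmax_quantize_int4(x: np.ndarray):
    """Symmetric per-tensor absmax quantization to signed 4-bit
    integers in [-8, 7]; returns (quantized ints, scale)."""
    scale = np.max(np.abs(x)) / 7.0
    q = np.clip(np.round(x / scale), -8, 7).astype(np.int8)
    return q, scale

def quantization_mse(x: np.ndarray) -> float:
    """Round-trip INT4 quantization error for a vector."""
    q, s = absmax_quantize_int4(x)
    return float(np.mean((x - q * s) ** 2))

# An outlier-heavy activation vector: one channel dominates, so the
# absmax scale is dictated by the outlier and the remaining channels
# collapse onto very few quantization levels.
rng = np.random.default_rng(0)
act = rng.normal(size=256)
act[0] = 40.0  # injected outlier

# The rotation spreads the outlier's energy across all 256 channels,
# yielding a smoother distribution and a much smaller INT4 error.
rotated = hadamard_transform(act)
print(quantization_mse(act), ">", quantization_mse(rotated))
```

Because the Hadamard matrix is orthogonal, the rotation can be undone exactly after the low-bit computation (or folded into adjacent weights), which is what makes applying it online during training and inference cheap.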
Problem

Research questions and friction points this paper is trying to address.

Address activation outliers in 1-bit LLMs
Enable native 4-bit activation quantization
Reduce memory and computational costs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Native 4-bit activation quantization for 1-bit LLMs
H-BitLinear module with online Hadamard transformation
Reduces memory and computational cost for inference