Sherry: Hardware-Efficient 1.25-Bit Ternary Quantization via Fine-grained Sparsification

📅 2026-01-12
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the challenge of deploying large language models on edge devices, where memory and computational constraints limit practicality and existing ternary quantization methods struggle to balance hardware alignment with bit efficiency. The authors propose a structured 1.25-bit ternary quantization scheme that compresses every four weights into five bits via 3:4 fine-grained sparsity. To mitigate weight collapse during training and preserve representational diversity, they introduce an annealed residual synaptic mechanism. Evaluated on LLaMA-3.2, the method matches state-of-the-art accuracy among ternary quantization approaches while reducing model bit width by 25% and accelerating inference by 10% on an Intel i7-14700HX CPU, all without accuracy loss.

Technology Category

Application Category

πŸ“ Abstract
The deployment of Large Language Models (LLMs) on resource-constrained edge devices is increasingly hindered by prohibitive memory and computational requirements. While ternary quantization offers a compelling solution by reducing weights to {-1, 0, +1}, current implementations suffer from a fundamental misalignment with commodity hardware. Most existing methods must choose between 2-bit aligned packing, which incurs significant bit wastage, and 1.67-bit irregular packing, which degrades inference speed. To resolve this tension, we propose Sherry, a hardware-efficient ternary quantization framework. Sherry introduces a 3:4 fine-grained sparsity that achieves a regularized 1.25-bit width by packing blocks of four weights into five bits, restoring power-of-two alignment. Furthermore, we identify a weight trapping issue in sparse ternary training, which leads to representational collapse. To address this, Sherry introduces Arenas, an annealing residual synapse mechanism that maintains representational diversity during training. Empirical evaluations on LLaMA-3.2 across five benchmarks demonstrate that Sherry matches state-of-the-art ternary performance while significantly reducing model size. Notably, on an Intel i7-14700HX CPU, our 1B model achieves zero accuracy loss compared to SOTA baselines while providing 25% bit savings and a 10% speedup. The code is available at https://github.com/Tencent/AngelSlim.
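The five-bits-per-four-weights packing in the abstract can be sketched in code. This is an illustrative assumption, not the paper's actual kernel: it reads "3:4 fine-grained sparsity" as exactly three of every four ternary weights being nonzero (±1), so a block is fully determined by the zero's position (2 bits) plus three sign bits, giving 4 × 2³ = 32 = 2⁵ codes, i.e. 5 bits per 4 weights = 1.25 bits/weight. The function names `pack_block` and `unpack_block` are hypothetical.

```python
def pack_block(block):
    """Pack four ternary weights (exactly one 0, the rest +/-1) into 5 bits.

    Layout assumption: bits 4-3 hold the zero's position, bits 2-0 hold
    the signs of the three nonzero weights in order (+1 -> 1, -1 -> 0).
    """
    assert len(block) == 4 and block.count(0) == 1
    zero_pos = block.index(0)          # 2 bits: which of the four weights is zero
    signs, bit = 0, 0
    for w in block:
        if w == 0:
            continue
        signs |= (1 if w == 1 else 0) << bit
        bit += 1
    return (zero_pos << 3) | signs     # 2 + 3 = 5 bits total

def unpack_block(code):
    """Inverse of pack_block: recover the four ternary weights from 5 bits."""
    zero_pos = (code >> 3) & 0b11
    signs = code & 0b111
    block, bit = [], 0
    for pos in range(4):
        if pos == zero_pos:
            block.append(0)
        else:
            block.append(1 if (signs >> bit) & 1 else -1)
            bit += 1
    return block
```

The 32 valid blocks map bijectively onto codes 0..31, so the 5-bit budget is used exactly; by contrast, packing three unconstrained ternary weights into 5 bits (3³ = 27 of 32 codes) yields the 1.67 bits/weight scheme the abstract cites, and 1.25/1.67 accounts for the quoted 25% bit savings.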
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Ternary Quantization
Hardware Efficiency
Model Compression
Edge Deployment
Innovation

Methods, ideas, or system contributions that make the work stand out.

ternary quantization
fine-grained sparsification
hardware-efficient
1.25-bit representation
annealing residual synapse
Hong Huang
Associate Professor, Huazhong University of Science and Technology
data mining, big data analysis
Decheng Wu
Tencent
Qiangqiang Hu
Tencent
Guanghua Yu
Tencent
Jinhai Yang
City University of Hong Kong
Jianchen Zhu
Tencent
Xue Liu
McGill University
Dapeng Wu
Chongqing University of Posts and Telecommunications
Wireless Network, Social Computing