🤖 AI Summary
This work addresses the challenge of deploying large language models on edge devices, where memory and computational constraints limit practicality, and where existing ternary quantization methods struggle to balance hardware alignment with bit efficiency. The authors propose a structured 1.25-bit ternary quantization scheme that compresses every four weights into five bits via 3:4 fine-grained sparsity. To mitigate weight trapping during sparse ternary training, which would otherwise cause representational collapse, they introduce an annealing residual synapse mechanism. Evaluated on LLaMA-3.2, the method matches state-of-the-art ternary quantization accuracy while reducing model size by 25% and speeding up inference by 10% on an Intel i7-14700HX CPU, with no accuracy loss relative to SOTA baselines.
📄 Abstract
The deployment of Large Language Models (LLMs) on resource-constrained edge devices is increasingly hindered by prohibitive memory and computational requirements. While ternary quantization offers a compelling solution by reducing weights to {-1, 0, +1}, current implementations suffer from a fundamental misalignment with commodity hardware: most existing methods must choose between 2-bit aligned packing, which incurs significant bit wastage, and 1.67-bit irregular packing, which degrades inference speed. To resolve this tension, we propose Sherry, a hardware-efficient ternary quantization framework. Sherry introduces 3:4 fine-grained sparsity, which achieves a regular 1.25-bit width by packing each block of four weights into five bits, restoring power-of-two alignment. Furthermore, we identify a weight-trapping issue in sparse ternary training that leads to representational collapse. To address it, Sherry introduces Arenas, an annealing residual synapse mechanism that maintains representational diversity during training. Empirical evaluations on LLaMA-3.2 across five benchmarks demonstrate that Sherry matches state-of-the-art ternary performance while significantly reducing model size. Notably, on an Intel i7-14700HX CPU, our 1B model achieves zero accuracy loss compared to SOTA baselines while providing 25% bit savings and a 10% speedup. The code is available at https://github.com/Tencent/AngelSlim.
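To make the 1.25-bit arithmetic concrete: under 3:4 sparsity, a group of four ternary weights with exactly one zero has 4 × 2³ = 32 possible patterns, which fit exactly into 5 bits. The minimal sketch below illustrates one such encoding (a 2-bit index of the zero position plus three sign bits). The bit layout, function names, and the exactly-one-zero assumption are illustrative guesses on our part, not the packing format specified by the paper.

```python
def pack_group(w):
    """Pack four ternary weights {-1, 0, +1} into one 5-bit code.

    Assumes the 3:4 pattern holds with exactly one zero per group.
    Illustrative layout (not the paper's specified format):
      bits 4-3: index of the zero position (0..3)
      bits 2-0: signs of the three nonzero weights (1 -> +1, 0 -> -1)
    4 positions x 8 sign patterns = 32 codes, exactly filling 5 bits.
    """
    zero_idx = w.index(0)  # position of the single zero in the group
    signs = [v > 0 for i, v in enumerate(w) if i != zero_idx]
    code = zero_idx << 3
    for bit, s in zip((2, 1, 0), signs):
        code |= int(s) << bit
    return code


def unpack_group(code):
    """Recover the four ternary weights from a 5-bit code (inverse of pack_group)."""
    zero_idx = (code >> 3) & 0b11
    weights, bit = [], 2
    for i in range(4):
        if i == zero_idx:
            weights.append(0)
        else:
            weights.append(1 if (code >> bit) & 1 else -1)
            bit -= 1
    return weights


# Round-trip check over all 32 valid 3:4 groups.
for zero_idx in range(4):
    for signs in range(8):
        code = (zero_idx << 3) | signs
        assert pack_group(unpack_group(code)) == code
```

A byte-level consequence of this packing: eight 5-bit codes cover 32 weights in exactly five bytes, so power-of-two-sized weight groups tile onto whole-byte storage, which is one plausible reading of the alignment benefit claimed in the abstract.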