AngelSlim: A more accessible, comprehensive, and efficient toolkit for large model compression

📅 2026-02-07
🤖 AI Summary
This work addresses the lack of a unified and efficient toolchain for large model compression and industrial deployment, particularly in ultra-low-bit quantization, long-context inference acceleration, and multimodal compression. To this end, we propose AngelSlim, a comprehensive toolkit integrating several innovations: FP8/INT8 post-training quantization, 2-bit ultra-low-bit quantization, training-aligned speculative decoding, hybrid static-dynamic sparse attention, IDPruner for vision token pruning, and Samp for adaptive audio token merging. Using this framework, we deliver the first industrially viable 2-bit large language model, HY-1.8B-int2; the toolkit's speculative decoding improves inference throughput by 1.8–2.0× without compromising output correctness, and its sparse attention significantly reduces first-token latency in long-context scenarios, thereby advancing the practical deployment of highly compressed large models.
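The FP8/INT8 post-training quantization mentioned above can be illustrated with a minimal sketch of symmetric per-channel INT8 weight quantization. The max-abs calibration and function names here are illustrative assumptions, not AngelSlim's actual algorithms.

```python
import numpy as np

def int8_quantize(w):
    """Symmetric per-channel INT8 PTQ sketch: scale each output channel
    by its max-abs value so it fits in [-127, 127]."""
    amax = np.max(np.abs(w), axis=1, keepdims=True)  # per-channel calibration
    scale = amax / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def int8_dequantize(q, scale):
    """Recover an approximate float tensor from INT8 values and scales."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 8)).astype(np.float32)
q, scale = int8_quantize(w)
# round-to-nearest bounds the per-element error by half a quantization step
err = np.max(np.abs(w - int8_dequantize(q, scale)))
```

In practice PTQ toolkits calibrate scales on activation statistics as well, but the weight-only case above already shows the core scale/round/clip pipeline.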

📝 Abstract
This technical report introduces AngelSlim, a comprehensive and versatile toolkit for large model compression developed by the Tencent Hunyuan team. By consolidating cutting-edge algorithms, including quantization, speculative decoding, token pruning, and distillation, AngelSlim provides a unified pipeline that streamlines the transition from model compression to industrial-scale deployment. To facilitate efficient acceleration, we integrate state-of-the-art FP8 and INT8 Post-Training Quantization (PTQ) algorithms alongside pioneering research in ultra-low-bit regimes, featuring HY-1.8B-int2 as the first industrially viable 2-bit large model. Beyond quantization, we propose a training-aligned speculative decoding framework compatible with multimodal architectures and modern inference engines, achieving 1.8x to 2.0x throughput gains without compromising output correctness. Furthermore, we develop a training-free sparse attention framework that reduces Time-to-First-Token (TTFT) in long-context scenarios by decoupling sparse kernels from model architectures through a hybrid of static patterns and dynamic token selection. For multimodal models, AngelSlim incorporates specialized pruning strategies, namely IDPruner for optimizing vision tokens via Maximal Marginal Relevance and Samp for adaptive audio token merging and pruning. By decoupling these compression strategies from low-level implementations, AngelSlim enables algorithm-focused research and tool-assisted deployment.
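The speculative decoding framework follows the standard draft-then-verify pattern: a small draft model proposes several tokens, and the target model keeps the longest agreeing prefix. A toy greedy version (with made-up draft/target next-token functions, not the paper's models) looks like:

```python
def speculative_decode(draft_next, target_next, prompt, gamma=4, max_len=16):
    """Toy greedy speculative decoding: the draft proposes gamma tokens,
    the target keeps matches and replaces the first mismatch."""
    seq = list(prompt)
    while len(seq) < max_len:
        # draft proposes gamma tokens autoregressively
        draft, ctx = [], list(seq)
        for _ in range(gamma):
            t = draft_next(ctx)
            draft.append(t)
            ctx.append(t)
        # target verifies the proposals (in real engines, in one batched pass)
        for t in draft:
            want = target_next(seq)
            if want != t:
                seq.append(want)  # first mismatch: take target's token, drop the rest
                break
            seq.append(t)
        else:
            seq.append(target_next(seq))  # all accepted: target emits a bonus token
    return seq[:max_len]

# hypothetical toy "models": next token is last token + 1 (mod 10)
target = lambda ctx: (ctx[-1] + 1) % 10
good_draft = lambda ctx: (ctx[-1] + 1) % 10  # always agrees with the target
bad_draft = lambda ctx: 7                     # rarely agrees
out_good = speculative_decode(good_draft, target, [0], max_len=12)
out_bad = speculative_decode(bad_draft, target, [0], max_len=8)
```

The output always matches the target model's own greedy decode, which is why speculative decoding accelerates inference without compromising output correctness.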
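The hybrid of static patterns and dynamic token selection in the sparse attention framework can be sketched for a single query: a static local window of recent keys is always attended, and a dynamic top-k of the remaining keys is added by attention score. The window/top-k sizes and single-head layout are illustrative assumptions, not AngelSlim's kernels.

```python
import numpy as np

def hybrid_sparse_attention(q, k, v, window=4, topk=2):
    """Single-query sparse attention sketch: static local window plus
    dynamic top-k of the remaining keys, chosen by score."""
    n = k.shape[0]
    scores = (k @ q) / np.sqrt(q.shape[-1])
    keep = set(range(max(0, n - window), n))          # static pattern
    rest = sorted((i for i in range(n) if i not in keep),
                  key=lambda i: scores[i], reverse=True)
    keep |= set(rest[:topk])                          # dynamic selection
    idx = sorted(keep)
    # softmax restricted to the kept positions
    p = np.exp(scores[idx] - scores[idx].max())
    p /= p.sum()
    return p @ v[idx], idx

rng = np.random.default_rng(0)
q = rng.normal(size=8)
k = rng.normal(size=(16, 8))
v = rng.normal(size=(16, 4))
out, idx = hybrid_sparse_attention(q, k, v)  # attends 4 + 2 of 16 positions
```

With `window >= n` the sketch degenerates to dense attention, which makes the approximation easy to sanity-check.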
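IDPruner's use of Maximal Marginal Relevance for vision token selection can be sketched as greedy MMR over token embeddings: keep tokens that are relevant to a query while penalizing redundancy with tokens already kept. The query vector (e.g. a pooled or CLS embedding), the lambda weighting, and the cosine-similarity choice are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def mmr_select(tokens, query, k, lam=0.5):
    """Greedy Maximal Marginal Relevance sketch: pick tokens similar to
    the query but dissimilar to tokens already chosen."""
    t = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
    qn = query / np.linalg.norm(query)
    rel = t @ qn               # relevance of each token to the query
    sim = t @ t.T              # pairwise token similarity (redundancy)
    chosen = [int(np.argmax(rel))]
    while len(chosen) < k:
        cand = [i for i in range(len(tokens)) if i not in chosen]
        # MMR score trades off relevance against redundancy
        scores = [lam * rel[i] - (1 - lam) * max(sim[i, j] for j in chosen)
                  for i in cand]
        chosen.append(cand[int(np.argmax(scores))])
    return chosen

rng = np.random.default_rng(0)
tokens = rng.normal(size=(32, 16))   # 32 vision tokens, 16-dim embeddings
query = rng.normal(size=16)          # e.g. a pooled text/CLS embedding
kept = mmr_select(tokens, query, k=8)
```

Pruning then keeps only the `kept` tokens, shrinking the vision sequence passed to the language model.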
Problem

Research questions and friction points this paper is trying to address.

large model compression
industrial deployment
efficient inference
multimodal models
resource-constrained acceleration
Innovation

Methods, ideas, or system contributions that make the work stand out.

model compression
post-training quantization
speculative decoding
sparse attention
multimodal pruning
Authors
Rui Cen, QiangQiang Hu, Hong Huang, Hong Liu, Song Liu, Xin Luo (University of Science and Technology of China), Lin Niu, Yifan Tan, Decheng Wu, Linchuan Xie, Rubing Yang (University of Pennsylvania), Guanghua Yu, Jianchen Zhu