SLMQuant: Benchmarking Small Language Model Quantization for Practical Deployment

📅 2025-11-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Small language models (SLMs) lose disproportionate accuracy under quantization on edge devices, and existing large language model (LLM) quantization methods transfer poorly to them. Method: We introduce SLMQuant—the first systematic quantization benchmark for SLMs—covering diverse architectures, tasks, and quantization strategies (weight-only, activation-aware, and mixed-precision). Through empirical analysis, we identify SLMs’ higher quantization sensitivity and distinct bottlenecks compared to LLMs, leading to a set of SLM-specific compression design principles. Contribution/Results: Directly applying LLM quantization schemes severely degrades SLM performance. SLMQuant fills a critical evaluation gap and provides reproducible, task-aware optimization guidance. Experiments show it improves edge deployment efficiency by up to 2.3× while preserving accuracy, establishing a principled foundation for efficient SLM deployment in resource-constrained environments.
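As context for the strategies named above, weight-only quantization typically maps floating-point weights to low-bit integers with a per-channel scale factor. A minimal round-to-nearest int8 sketch (illustrative only; the function names and this scheme are assumptions, not the paper's implementation):

```python
import numpy as np

def quantize_weights_int8(w: np.ndarray):
    """Symmetric per-output-channel round-to-nearest int8 quantization.

    Illustrative sketch of weight-only quantization; not the paper's code.
    """
    # One scale per output channel (row), from that row's max absolute value.
    scale = np.abs(w).max(axis=1, keepdims=True) / 127.0
    scale = np.where(scale == 0, 1.0, scale)          # guard all-zero rows
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 8)).astype(np.float32)
q, s = quantize_weights_int8(w)
# Round-to-nearest bounds the per-weight error by half a scale step.
err = np.abs(w - dequantize(q, s)).max()
```

Activation-aware and mixed-precision methods build on the same primitive but choose scales (or bit-widths) per layer using calibration data rather than weight statistics alone.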

📝 Abstract
Despite the growing interest in Small Language Models (SLMs) as resource-efficient alternatives to Large Language Models (LLMs), their deployment on edge devices remains challenging due to unresolved efficiency gaps in model compression. While quantization has proven effective for LLMs, its applicability to SLMs is significantly underexplored, with critical questions about differing quantization bottlenecks and efficiency profiles. This paper introduces SLMQuant, the first systematic benchmark for evaluating LLM compression techniques when applied to SLMs. Through comprehensive multi-track evaluations across diverse architectures and tasks, we analyze how state-of-the-art quantization methods perform on SLMs. Our findings reveal fundamental disparities between SLMs and LLMs in quantization sensitivity, demonstrating that direct transfer of LLM-optimized techniques leads to suboptimal results due to SLMs' unique architectural characteristics and training dynamics. We identify key factors governing effective SLM quantization and propose actionable design principles for SLM-tailored compression. SLMQuant establishes a foundational framework for advancing efficient SLM deployment on low-end devices in edge applications, and provides critical insights for deploying lightweight language models in resource-constrained scenarios.
Problem

Research questions and friction points this paper is trying to address.

Evaluating quantization effectiveness for Small Language Models on edge devices
Identifying unique quantization bottlenecks in SLMs versus LLMs
Establishing tailored compression principles for efficient SLM deployment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematic benchmark for SLM quantization evaluation
Identifies unique SLM quantization sensitivity disparities
Proposes SLM-tailored compression design principles
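One common way to probe the per-layer quantization sensitivity disparities the benchmark highlights is to fake-quantize one layer at a time and measure the change in model output. A hypothetical sketch on a toy ReLU MLP (the stand-in model, bit-width, and error proxy are illustrative assumptions, not the paper's protocol):

```python
import numpy as np

def fake_quant(w: np.ndarray, bits: int = 4) -> np.ndarray:
    """Simulated symmetric per-tensor round-to-nearest quantization."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    if scale == 0:
        scale = 1.0                       # guard all-zero tensors
    return np.round(w / scale).clip(-qmax, qmax) * scale

def layer_sensitivity(layers, x):
    """Fake-quantize one weight matrix at a time; report relative output change.

    Higher scores suggest the layer is more quantization-sensitive and may
    warrant higher precision in a mixed-precision scheme.
    """
    def forward(ws, h):
        for w in ws:
            h = np.maximum(h @ w, 0)      # toy ReLU MLP as a stand-in model
        return h

    ref = forward(layers, x)
    scores = []
    for i in range(len(layers)):
        perturbed = [fake_quant(w) if j == i else w for j, w in enumerate(layers)]
        out = forward(perturbed, x)
        scores.append(np.linalg.norm(out - ref) / (np.linalg.norm(ref) + 1e-12))
    return scores

rng = np.random.default_rng(0)
layers = [rng.normal(size=(16, 16)).astype(np.float32) for _ in range(3)]
x = rng.normal(size=(4, 16)).astype(np.float32)
scores = layer_sensitivity(layers, x)
```

In a real benchmark the proxy would be a task metric such as perplexity on calibration data rather than an output-norm ratio, but the quantize-one-layer-at-a-time loop is the same.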
Jiacheng Wang
Nanyang Technological University
ISAC, GenAI, Low-altitude wireless network, Semantic Communications
Yejun Zeng
School of Artificial Intelligence, Beihang University
Jinyang Guo
The University of Sydney
Deep Learning, Efficient Methods, Edge Computing
Yuqing Ma
School of Artificial Intelligence, Beihang University
Aishan Liu
School of Computer Science and Engineering, Beihang University
Xianglong Liu
School of Computer Science and Engineering, Beihang University