🤖 AI Summary
Small language models (SLMs) are hard to deploy efficiently on edge devices, and existing large language model (LLM) quantization methods transfer poorly to them. Method: We introduce SLMQuant—the first systematic quantization benchmark for SLMs—covering diverse architectures, tasks, and quantization strategies (weight-only, activation-aware, and mixed-precision). Through empirical analysis, we identify SLMs' higher quantization sensitivity and distinct bottlenecks compared to LLMs, leading to a set of SLM-specific compression design principles. Contribution/Results: We show that directly applying LLM quantization schemes severely degrades SLM performance. SLMQuant fills a critical evaluation gap and provides reproducible, task-aware optimization guidance. Experiments show the resulting guidelines improve edge deployment efficiency by up to 2.3× while preserving accuracy, establishing a principled foundation for efficient SLM deployment in resource-constrained environments.
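To make the weight-only track concrete, here is a minimal sketch of symmetric round-to-nearest weight quantization, the simplest scheme in that family. This is an illustration of the general technique, not code from SLMQuant; the function names and the per-tensor scaling choice are our own assumptions.

```python
import numpy as np

def quantize_weights(w: np.ndarray, bits: int = 4):
    """Symmetric per-tensor round-to-nearest quantization (illustrative sketch)."""
    qmax = 2 ** (bits - 1) - 1                  # e.g. 7 for INT4
    scale = float(np.abs(w).max()) / qmax       # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

# Reconstruction error grows as the bit-width shrinks; the paper's finding
# is that SLMs are markedly more sensitive to this error than LLMs.
w = np.random.randn(256, 256).astype(np.float32)
q, s = quantize_weights(w, bits=4)
err = np.abs(dequantize(q, s) - w).mean()
```

The mean absolute error `err` is bounded by roughly half the scale; per-channel scales or activation-aware calibration (the benchmark's other tracks) trade extra metadata for lower error.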
📝 Abstract
Despite growing interest in Small Language Models (SLMs) as resource-efficient alternatives to Large Language Models (LLMs), their deployment on edge devices remains challenging due to unresolved efficiency gaps in model compression. While quantization has proven effective for LLMs, its applicability to SLMs remains significantly underexplored, leaving open critical questions about whether their quantization bottlenecks and efficiency profiles differ. This paper introduces SLMQuant, the first systematic benchmark for evaluating LLM compression techniques when applied to SLMs. Through comprehensive multi-track evaluations across diverse architectures and tasks, we analyze how state-of-the-art quantization methods perform on SLMs. Our findings reveal fundamental disparities between SLMs and LLMs in quantization sensitivity, demonstrating that direct transfer of LLM-optimized techniques yields suboptimal results due to SLMs' distinct architectural characteristics and training dynamics. We identify key factors governing effective SLM quantization and propose actionable design principles for SLM-tailored compression. SLMQuant establishes a foundational framework for advancing efficient SLM deployment on low-end devices in edge applications, and provides critical insights for deploying lightweight language models in resource-constrained scenarios.