🤖 AI Summary
This work addresses the challenges of deploying small language models on edge devices, where memory constraints, latency, energy consumption, and noise in non-volatile memory severely limit performance. Conventional quantization and storage architectures struggle to balance efficiency and accuracy under these conditions. The authors propose a retraining-free, outlier-aware quantization method coupled with a co-designed ReRAM/MRAM hybrid memory architecture: regular weights are stored in high-density multi-level ReRAM, while critical outliers are retained in high-precision MRAM, alongside optimized KV caching. On edge AI platforms, this approach reduces memory footprint by 6.3–7.3×, external data transfers by 7.6×, and energy and latency by 11.7× and 12.5×, respectively, compared to FP16. It mitigates non-volatile memory noise while matching or surpassing state-of-the-art quantization methods in both accuracy and performance.
📝 Abstract
Deploying Small Language Models (SLMs) on edge platforms is critical for real-time, privacy-sensitive generative AI, yet remains constrained by memory, latency, and energy budgets. Quantization reduces model size and cost but suffers from device noise in emerging non-volatile memories, while conventional memory hierarchies further limit efficiency: SRAM provides fast access but has low density; DRAM must simultaneously accommodate static weights and dynamic KV caches, creating bandwidth contention; and Flash, although dense, is used primarily for initialization and remains inactive during inference. These limitations highlight the need for hybrid memory organizations tailored to LLM inference. We propose Outlier-aware Quantization with Memory Co-design (QMC), a retraining-free quantization method paired with a novel heterogeneous memory architecture. QMC identifies inlier and outlier weights in SLMs, storing inlier weights in compact multi-level Resistive RAM (ReRAM) while preserving critical outliers in high-precision on-chip Magnetoresistive RAM (MRAM), mitigating noise-induced degradation. On language modeling and reasoning benchmarks, QMC matches or outperforms state-of-the-art quantization methods that use advanced algorithms and hybrid data formats, while achieving greater compression under both algorithm-only evaluation and realistic deployment settings. On the latest edge AI platform, QMC reduces memory usage by 6.3x-7.3x, external data transfers by 7.6x, energy by 11.7x, and latency by 12.5x relative to FP16, establishing QMC as a scalable, deployment-ready co-design for efficient on-device inference.
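To make the inlier/outlier split concrete, here is a minimal NumPy sketch of generic outlier-aware quantization. It is an illustration only, not QMC's actual criterion: the magnitude-based top-1% outlier threshold, the 4-bit symmetric uniform quantizer, and the `split_and_quantize` function name are all assumptions for this example. Inliers are quantized to low precision (modeling multi-level ReRAM storage) while outliers are kept at full precision (modeling high-precision MRAM).

```python
import numpy as np

def split_and_quantize(w, outlier_frac=0.01, bits=4):
    """Illustrative outlier-aware quantization (hypothetical, not the paper's method).

    Weights whose magnitude falls in the top `outlier_frac` fraction are kept
    at full precision (high-precision MRAM in QMC's setting); the remaining
    inliers are symmetrically quantized to `bits` bits (multi-level ReRAM).
    """
    w = np.asarray(w, dtype=np.float32)
    k = max(1, int(outlier_frac * w.size))
    # Magnitude threshold separating outliers from inliers.
    thresh = np.partition(np.abs(w).ravel(), -k)[-k]
    outlier_mask = np.abs(w) >= thresh
    # Zero out outliers so the quantization scale is set by inliers only.
    inliers = np.where(outlier_mask, 0.0, w)
    qmax = 2 ** (bits - 1) - 1
    scale = float(np.abs(inliers).max()) / qmax or 1.0
    # Symmetric uniform quantize-dequantize of the inliers.
    q = np.clip(np.round(inliers / scale), -qmax - 1, qmax)
    deq = q * scale
    # Reassemble: dequantized inliers plus exact full-precision outliers.
    return np.where(outlier_mask, w, deq), outlier_mask

rng = np.random.default_rng(0)
w = rng.normal(size=4096).astype(np.float32)
w[:4] = [12.0, -9.5, 8.7, -11.2]        # inject a few large-magnitude outliers
w_hat, mask = split_and_quantize(w, outlier_frac=0.01, bits=4)
assert np.all(w_hat[mask] == w[mask])   # outliers survive exactly
print(f"max inlier error: {np.abs(w_hat - w)[~mask].max():.4f}")
```

Keeping the handful of outliers exact bounds the quantization error of every remaining weight by half a quantization step, which is why outlier-aware schemes avoid retraining: the scale is no longer stretched by a few extreme values.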