🤖 AI Summary
Deploying large language models (LLMs) on edge devices is hindered by insufficient memory elasticity, which arises from dynamically fluctuating shared-memory budgets, tight storage limits, and the coarse-grained precision-resource trade-offs of existing quantization methods. To address this, we propose FlexQuant, a dynamic quantization framework tailored to unified-memory architectures with volatile capacity. It introduces a novel quantization-family generation mechanism, enabling 15x finer-grained precision control and 10x model-storage compression, and it jointly integrates structured pruning, multi-precision quantization, and elastic loading scheduling while remaining compatible with mainstream quantization paradigms. Experiments demonstrate that, under stringent storage limits, the framework significantly improves inference quality and energy efficiency while enabling continuous, configurable trade-offs between resource consumption and accuracy. This provides an efficient, flexible, memory-adaptive solution for on-device LLM deployment.
📝 Abstract
Deploying LLMs on edge devices presents serious technical challenges. Memory elasticity is crucial for edge devices with unified memory, where memory is shared across applications and fluctuates dynamically. Existing solutions suffer from either coarse transition granularity or high storage costs. We propose FlexQuant, a novel elasticity framework that generates an ensemble of quantized models, providing an elastic hosting solution with a 15x granularity improvement and a 10x storage reduction compared to SoTA methods. FlexQuant works with most quantization methods and, through our pruning method, creates a family of trade-off options under various storage limits, bringing strong performance and flexibility to the edge deployment of LLMs.
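The elastic-hosting idea described above can be sketched as follows. This is a minimal illustration, not FlexQuant's actual implementation: the variant names, sizes, and greedy selection policy are hypothetical, standing in for the family of quantized models the framework generates.

```python
# Hypothetical sketch: given a fluctuating memory budget, pick the
# highest-quality quantized variant that still fits. The variants and
# their sizes (in GiB) below are illustrative, not from the paper.
FAMILY = [
    ("2-bit-pruned", 1.2),
    ("3-bit", 1.8),
    ("4-bit", 2.4),
    ("4-bit-wide", 3.0),
    ("8-bit", 4.8),
]  # ordered from smallest/lowest precision to largest/highest precision

def select_variant(available_gib):
    """Return the largest (highest-quality) variant fitting the budget,
    or None if even the smallest family member does not fit."""
    best = None
    for name, size in FAMILY:
        if size <= available_gib:
            best = name  # keep upgrading while the budget allows
    return best

# As the shared-memory budget shrinks, selection degrades gracefully:
for budget in (5.0, 2.5, 1.0):
    print(budget, "->", select_variant(budget))
```

A finer-grained family (the paper's 15x granularity claim) would simply mean more, more closely spaced entries in such a list, so that changes in available memory trigger smaller quality transitions.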