APreQEL: Adaptive Mixed Precision Quantization For Edge LLMs

📅 2026-03-24
📈 Citations: 0
Influential: 0
📝 Abstract
Large language models have demonstrated strong performance across tasks such as reasoning, code generation, and complex problem solving. This advancement, however, comes with high computational cost and memory requirements, making it challenging to deploy these models on edge devices, where real-time response and data privacy matter. Quantization is a common approach to reducing memory use, but most methods apply it uniformly across all layers, ignoring the fact that different layers may respond differently to reduced precision. Importantly, memory consumption and computational throughput are not necessarily aligned, which further complicates deployment decisions. This paper proposes an adaptive mixed-precision quantization mechanism that balances memory, latency, and accuracy in edge deployment under user-defined priorities. It does so by analyzing each layer's contribution and by profiling how different quantization types behave on the target hardware platform, then assigning the most suitable quantization type to each layer. This integration ensures that layer importance and overall performance trade-offs are jointly respected. Our work unlocks configurations that uniform quantization cannot achieve, expanding the solution space for efficiently deploying LLMs on resource-constrained devices.
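To make the idea concrete, here is a minimal, hypothetical sketch of layer-wise mixed-precision assignment. It is not the paper's actual algorithm: the `Layer` class, the per-layer `sensitivity` scores, the two candidate formats, and the greedy budget loop are all illustrative assumptions standing in for the paper's layer-contribution analysis and hardware profiling.

```python
# Hypothetical sketch (NOT the paper's algorithm): each layer carries an
# assumed sensitivity score; a greedy pass demotes the least-sensitive
# layers from int8 to int4 until a user-defined memory budget is met.
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    params: int          # parameter count
    sensitivity: float   # assumed sensitivity to reduced precision

# Illustrative candidate quantization types and their bits per weight.
QTYPES = {"int8": 8, "int4": 4}

def assign_precisions(layers, memory_budget_bytes):
    """Start every layer at int8, then demote layers in order of
    increasing sensitivity until total weight memory fits the budget."""
    assignment = {l.name: "int8" for l in layers}

    def total_bytes():
        return sum(l.params * QTYPES[assignment[l.name]] // 8 for l in layers)

    for layer in sorted(layers, key=lambda l: l.sensitivity):
        if total_bytes() <= memory_budget_bytes:
            break
        assignment[layer.name] = "int4"
    return assignment

layers = [
    Layer("attn.0", 1000, 0.9),
    Layer("mlp.0", 2000, 0.1),
    Layer("mlp.1", 2000, 0.5),
]
# 5000 bytes at int8; a 4000-byte budget forces the least-sensitive
# layer (mlp.0) down to int4 while the others stay at int8.
plan = assign_precisions(layers, 4000)
```

A real implementation would replace the static sensitivity scores with measured layer-wise contributions and fold in per-format latency profiles from the target hardware, which is where the memory-latency-accuracy trade-off the abstract describes comes in.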
Problem

Research questions and friction points this paper is trying to address.

edge deployment
large language models
mixed precision quantization
memory-latency trade-off
layer-wise sensitivity
Innovation

Methods, ideas, or system contributions that make the work stand out.

adaptive mixed precision quantization
edge LLMs
layer-wise contribution analysis
hardware-aware quantization
memory-latency-accuracy trade-off