UniQL: Unified Quantization and Low-rank Compression for Adaptive Edge LLMs

📅 2025-12-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the challenges of deploying large language models (LLMs) on memory-constrained edge devices with fluctuating resource availability, this paper proposes UniQL, a unified quantization and low-rank co-compression framework with on-device configurable pruning rates, applicable to Transformer, State Space Model (SSM), and hybrid architectures. The method combines an efficient structured weight-sorting scheme, quantization-aware singular value decomposition (SVD), state-aware weight sorting for SSMs, and a fused rotary positional embedding (RoPE) kernel for pruned models. Weight sorting, fine-tuning, and quantization are completed in a single cloud-side pass, after which the pruning ratio remains configurable on-device up to 35%. Experiments show 4x–5.7x memory reduction, 2.7x–3.4x higher token throughput, and accuracy within 5% of the original models at 15% pruning. The framework improves both the efficiency and the flexibility of LLM deployment on resource-constrained edge platforms.
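The key idea behind on-device configurable pruning is that channel ranking happens once in the cloud, so pruning on the device reduces to truncating a pre-sorted weight matrix. A minimal NumPy sketch of that split, using column L2 norm as the importance proxy (an assumption for illustration; the paper's actual ranking criterion and its 20x-faster sorting method are not reproduced here):

```python
import numpy as np

def sort_channels_by_importance(W):
    """Cloud-side step: reorder weight columns by descending L2 norm
    (a simple importance proxy, not UniQL's exact criterion).
    After this, any pruning ratio is a contiguous truncation."""
    order = np.argsort(-np.linalg.norm(W, axis=0))
    return W[:, order], order

def prune_on_device(W_sorted, ratio):
    """Device-side step: keep the most important (1 - ratio) fraction
    of columns; no re-ranking or re-training needed at run time."""
    keep = int(round(W_sorted.shape[1] * (1.0 - ratio)))
    return W_sorted[:, :keep]

W = np.random.randn(64, 128)
W_sorted, order = sort_channels_by_importance(W)
W_pruned = prune_on_device(W_sorted, 0.35)  # 35% pruning keeps 83 of 128 columns
```

Because the sort is fixed offline, the device can pick any ratio up to the supported maximum as its workload changes, without touching the ranking logic.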

📝 Abstract
Deploying large language models (LLMs) on mobile platforms faces significant challenges due to the limited memory and shared computational resources of the device. Resource availability may be an issue as it is directly impacted by the current device workload, adding to the uncertainty of model deployment. We introduce UniQL, a unified post-training quantization and low-rank compression framework with on-device configurable pruning rates for edge LLMs. UniQL is a general framework that integrates quantization and low-rank compression for Transformers, State Space Models (SSMs), and hybrid models to support diverse edge applications. In our proposed joint framework, we introduce an efficient structured weight-sorting method that speeds up computation by 20x, quantization-aware singular value decomposition (SVD) to minimize quantization errors, state-aware weight sorting for SSMs, and a fused rotary positional embedding (RoPE) kernel for pruned models. Our framework performs weight-sorting, fine-tuning, and quantization in the cloud in a single-pass workflow, while enabling on-device configurable pruning rates up to 35%. Our experiments show that quantized and pruned models achieve a memory reduction of 4x-5.7x and a token-throughput improvement of 2.7x-3.4x, maintaining accuracy within 5% of the original models at 15% pruning across Transformers (Llama3 and Qwen2.5), SSMs (Mamba2), and hybrid models (Nemotron-H and Bamba-v2). The code and quantized models are available at: https://github.com/enyac-group/UniQL.
Problem

Research questions and friction points this paper is trying to address.

LLMs on mobile devices are constrained by limited memory and by compute that is shared with other workloads.
Resource availability shifts with the current device workload, so a single fixed compression ratio chosen offline is a poor fit; configurable pruning rates let the model adapt.
Edge applications span Transformers, SSMs, and hybrid models, so compression must work uniformly across architectures.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified post-training quantization and low-rank compression framework
Efficient structured weight-sorting method speeds up computation 20x
Quantization-aware SVD minimizes quantization error, with pruning rates configurable on-device up to 35%
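One way to read "quantization-aware SVD" is that the low-rank factors are fit against the quantization residual rather than against the full-precision weights alone, so the rank budget is spent where quantization hurts most. The sketch below illustrates that idea with symmetric round-to-nearest 4-bit quantization; it is an assumption-laden toy, not UniQL's actual algorithm:

```python
import numpy as np

def quantize_sym(W, bits=4):
    """Symmetric per-tensor round-to-nearest quantization (illustrative
    stand-in for the paper's PTQ scheme); returns dequantized weights."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(W).max() / qmax
    q = np.clip(np.round(W / scale), -qmax, qmax)
    return q * scale

def quant_aware_lowrank(W, rank, bits=4):
    """Quantize W, then fit a rank-r correction to the quantization
    residual via truncated SVD, so W is approximated as Wq + L @ R."""
    Wq = quantize_sym(W, bits)
    U, s, Vt = np.linalg.svd(W - Wq, full_matrices=False)
    L = U[:, :rank] * s[:rank]   # absorb singular values into the left factor
    R = Vt[:rank]
    return Wq, L, R

W = np.random.randn(128, 128)
Wq, L, R = quant_aware_lowrank(W, rank=16)
err_q = np.linalg.norm(W - Wq)             # quantization-only error
err_qlr = np.linalg.norm(W - (Wq + L @ R))  # with low-rank correction
```

By the Eckart–Young theorem, the truncated SVD of the residual is the best rank-r correction in Frobenius norm, so `err_qlr` never exceeds `err_q`.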