On-the-Fly Adaptation to Quantization: Configuration-Aware LoRA for Efficient Fine-Tuning of Quantized LLMs

📅 2025-09-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the inefficiency of repeatedly fine-tuning quantized large language models (LLMs) for distinct bit-width configurations on edge devices, this work proposes a training-free dynamic adaptation framework. Methodologically, it introduces a configuration-aware mechanism that leverages a Pareto-optimal frontier search to construct a high-quality set of quantization configurations, and designs a quantization-configuration mapping network that dynamically steers LoRA adapters, enabling real-time, layer-wise heterogeneous bit-width assignment. The core contribution lies in enabling cross-configuration knowledge transfer and zero-overhead adaptive adjustment without retraining. Experiments demonstrate that, without any additional fine-tuning, the method matches or surpasses dedicated fine-tuned baselines in accuracy, significantly enhancing the deployment flexibility and efficiency of quantized LLMs on resource-constrained edge devices.

📝 Abstract
As increasingly large pre-trained models are released, deploying them on edge devices for privacy-preserving applications requires effective compression. Recent works combine quantization with the fine-tuning of high-precision LoRA adapters, which can substantially reduce model size while mitigating the accuracy loss from quantization. However, edge devices have inherently heterogeneous capabilities, while performing configuration-wise fine-tuning for every quantization setting is computationally prohibitive. In this paper, we propose CoA-LoRA, a method that dynamically adjusts the LoRA adapter to arbitrary quantization configurations (i.e., the per-layer bit-width choices of a pre-trained model) without requiring repeated fine-tuning. This is accomplished via a configuration-aware model that maps each configuration to its low-rank adjustments. The effectiveness of this model critically depends on the training configuration set, a collection of configurations chosen to cover different total bit-width budgets. However, constructing a high-quality configuration set is non-trivial. We therefore design a Pareto-based configuration search that iteratively optimizes the training configuration set, yielding more precise low-rank adjustments. Our experiments demonstrate that, unlike the state-of-the-art methods that require fine-tuning a separate LoRA adapter for each configuration, CoA-LoRA incurs no additional time cost while achieving comparable or even superior performance to those methods.
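The configuration-aware model described above maps a per-layer bit-width configuration to low-rank adjustments for the LoRA adapter. A minimal sketch of that idea, assuming a toy single-hidden-layer network (the layer count, rank, sizes, and architecture here are illustrative assumptions, not the paper's actual design):

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_LAYERS = 4   # layers of the quantized base model (assumed)
RANK = 2         # LoRA rank (assumed)
HIDDEN = 8       # hidden width of the mapping network (assumed)

# Mapping-network weights: configuration vector -> per-layer LoRA adjustments.
W1 = rng.normal(scale=0.1, size=(NUM_LAYERS, HIDDEN))
W2 = rng.normal(scale=0.1, size=(HIDDEN, NUM_LAYERS * RANK))

def map_config_to_adjustments(bit_widths):
    """Map a layer-wise bit-width configuration to low-rank adjustment scales."""
    x = np.asarray(bit_widths, dtype=float) / 8.0   # normalize bit-widths
    h = np.tanh(x @ W1)                             # hidden representation
    return (h @ W2).reshape(NUM_LAYERS, RANK)       # one adjustment per layer/rank

# Two different quantization configurations yield two different adjustment
# sets from the same network -- no per-configuration fine-tuning required.
adj_a = map_config_to_adjustments([4, 4, 8, 8])
adj_b = map_config_to_adjustments([2, 4, 4, 8])
print(adj_a.shape)  # (4, 2)
```

In this sketch, switching configurations is a single forward pass through the mapping network, which is what makes the adaptation "zero-overhead" relative to retraining a dedicated adapter per configuration.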
Problem

Research questions and friction points this paper is trying to address.

Dynamically adapting LoRA to arbitrary quantization settings without retraining
Overcoming computational cost of per-configuration fine-tuning for quantized LLMs
Enabling efficient deployment of compressed models on heterogeneous edge devices
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic LoRA adaptation for arbitrary quantization configurations
Configuration-aware model mapping settings to adjustments
Pareto-based search optimizing training configuration sets
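The Pareto-based configuration search selects training configurations that trade off total bit budget against quality. A minimal sketch of non-dominated selection in that two-objective space, with made-up candidate configurations and proxy accuracy values purely for illustration:

```python
def pareto_front(candidates):
    """Keep configurations not dominated on (lower cost, higher accuracy)."""
    front = []
    for i, (cost_i, acc_i, cfg_i) in enumerate(candidates):
        dominated = any(
            cost_j <= cost_i and acc_j >= acc_i
            and (cost_j < cost_i or acc_j > acc_i)
            for j, (cost_j, acc_j, _) in enumerate(candidates)
            if j != i
        )
        if not dominated:
            front.append(cfg_i)
    return front

# (total bits, proxy accuracy, per-layer bit-widths) -- illustrative values.
candidates = [
    (24, 0.71, [4, 4, 8, 8]),
    (24, 0.69, [8, 8, 4, 4]),  # same budget, lower accuracy -> dominated
    (18, 0.66, [2, 4, 4, 8]),
    (32, 0.74, [8, 8, 8, 8]),
    (18, 0.60, [4, 4, 4, 6]),  # dominated by [2, 4, 4, 8]
]
front = pareto_front(candidates)
print(front)  # [[4, 4, 8, 8], [2, 4, 4, 8], [8, 8, 8, 8]]
```

The paper's search iterates this kind of selection to refine the training configuration set; this fragment shows only the non-dominated filtering step at the core of any Pareto-frontier search.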